Dec  1 13:25:14 np0005541455 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Dec  1 13:25:14 np0005541455 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  1 13:25:14 np0005541455 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
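Annotation: the command line above is whitespace-separated bare flags and key=value pairs, and none of these values contain spaces, so a plain split is enough. A minimal parsing sketch in Python (illustrative only):

    # Minimal sketch: split the kernel command line into bare flags and
    # key=value pairs. Values in this particular line contain no spaces.
    cmdline = ("BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 "
               "root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro "
               "console=ttyS0,115200n8 no_timer_check net.ifnames=0 "
               "crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M")

    params = {}
    for tok in cmdline.split():
        key, sep, val = tok.partition("=")
        params[key] = val if sep else True  # bare words like "ro" become flags

    print(params["root"])         # UUID=b277050f-8ace-464d-abb6-4c46d4c45253
    print(params["crashkernel"])  # 1G-2G:192M,2G-64G:256M,64G-:512M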
Dec  1 13:25:14 np0005541455 kernel: BIOS-provided physical RAM map:
Dec  1 13:25:14 np0005541455 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  1 13:25:14 np0005541455 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  1 13:25:14 np0005541455 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  1 13:25:14 np0005541455 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  1 13:25:14 np0005541455 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  1 13:25:14 np0005541455 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  1 13:25:14 np0005541455 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  1 13:25:14 np0005541455 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
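Annotation: summing the three "usable" e820 ranges gives this guest's RAM. A quick check (the ranges are inclusive, hence the +1; hex bounds copied from the map above):

    # Usable e820 ranges from the map above (inclusive bounds).
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total / 2**30:.2f} GiB")  # ~8.00 GiB, consistent with the
                                       # 8388068K total reported later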
Dec  1 13:25:14 np0005541455 kernel: NX (Execute Disable) protection: active
Dec  1 13:25:14 np0005541455 kernel: APIC: Static calls initialized
Dec  1 13:25:14 np0005541455 kernel: SMBIOS 2.8 present.
Dec  1 13:25:14 np0005541455 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  1 13:25:14 np0005541455 kernel: Hypervisor detected: KVM
Dec  1 13:25:14 np0005541455 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  1 13:25:14 np0005541455 kernel: kvm-clock: using sched offset of 3343518689 cycles
Dec  1 13:25:14 np0005541455 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  1 13:25:14 np0005541455 kernel: tsc: Detected 2800.000 MHz processor
Dec  1 13:25:14 np0005541455 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  1 13:25:14 np0005541455 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  1 13:25:14 np0005541455 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  1 13:25:14 np0005541455 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  1 13:25:14 np0005541455 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  1 13:25:14 np0005541455 kernel: Using GB pages for direct mapping
Dec  1 13:25:14 np0005541455 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Dec  1 13:25:14 np0005541455 kernel: ACPI: Early table checksum verification disabled
Dec  1 13:25:14 np0005541455 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  1 13:25:14 np0005541455 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 13:25:14 np0005541455 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 13:25:14 np0005541455 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 13:25:14 np0005541455 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  1 13:25:14 np0005541455 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 13:25:14 np0005541455 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  1 13:25:14 np0005541455 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  1 13:25:14 np0005541455 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  1 13:25:14 np0005541455 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  1 13:25:14 np0005541455 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  1 13:25:14 np0005541455 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
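Annotation: each "Reserving ... table memory" span above is exactly the table's length field (the hex size column in the earlier table-discovery lines). A consistency check over all five tables:

    # Reservation spans vs. the length fields printed with each ACPI table.
    tables = {
        "FACP": (0xbffe1571, 0xbffe15e4, 0x000074),
        "DSDT": (0xbffdfc80, 0xbffe1570, 0x0018f1),
        "FACS": (0xbffdfc40, 0xbffdfc7f, 0x000040),
        "APIC": (0xbffe15e5, 0xbffe1694, 0x0000b0),
        "WAET": (0xbffe1695, 0xbffe16bc, 0x000028),
    }
    for name, (start, end, length) in tables.items():
        assert end - start + 1 == length, name  # all five agree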
Dec  1 13:25:14 np0005541455 kernel: No NUMA configuration found
Dec  1 13:25:14 np0005541455 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  1 13:25:14 np0005541455 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  1 13:25:14 np0005541455 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
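Annotation: the 256 MB reservation is the middle bucket of crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M, since this ~8 GiB guest falls in the 2G-64G range. An illustrative reimplementation of the range:size selection (not the kernel's actual parser; parse_size is a helper invented here):

    UNITS = {"M": 2**20, "G": 2**30}

    def parse_size(s):
        return int(s[:-1]) * UNITS[s[-1]]

    def crashkernel_size(spec, total_ram):
        # First entry whose [lo, hi) range contains total RAM wins;
        # an open-ended range like "64G-" means no upper bound.
        for entry in spec.split(","):
            rng, size = entry.split(":")
            lo, _, hi = rng.partition("-")
            lo_b = parse_size(lo)
            hi_b = parse_size(hi) if hi else float("inf")
            if lo_b <= total_ram < hi_b:
                return parse_size(size)
        return 0

    size = crashkernel_size("1G-2G:192M,2G-64G:256M,64G-:512M", 8 * 2**30)
    print(size // 2**20, "MB")  # 256 MB, matching the reservation above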
Dec  1 13:25:14 np0005541455 kernel: Zone ranges:
Dec  1 13:25:14 np0005541455 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  1 13:25:14 np0005541455 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  1 13:25:14 np0005541455 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  1 13:25:14 np0005541455 kernel:  Device   empty
Dec  1 13:25:14 np0005541455 kernel: Movable zone start for each node
Dec  1 13:25:14 np0005541455 kernel: Early memory node ranges
Dec  1 13:25:14 np0005541455 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  1 13:25:14 np0005541455 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  1 13:25:14 np0005541455 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  1 13:25:14 np0005541455 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  1 13:25:14 np0005541455 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  1 13:25:14 np0005541455 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  1 13:25:14 np0005541455 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  1 13:25:14 np0005541455 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  1 13:25:14 np0005541455 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  1 13:25:14 np0005541455 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  1 13:25:14 np0005541455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  1 13:25:14 np0005541455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  1 13:25:14 np0005541455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  1 13:25:14 np0005541455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  1 13:25:14 np0005541455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  1 13:25:14 np0005541455 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  1 13:25:14 np0005541455 kernel: TSC deadline timer available
Dec  1 13:25:14 np0005541455 kernel: CPU topo: Max. logical packages:   8
Dec  1 13:25:14 np0005541455 kernel: CPU topo: Max. logical dies:       8
Dec  1 13:25:14 np0005541455 kernel: CPU topo: Max. dies per package:   1
Dec  1 13:25:14 np0005541455 kernel: CPU topo: Max. threads per core:   1
Dec  1 13:25:14 np0005541455 kernel: CPU topo: Num. cores per package:     1
Dec  1 13:25:14 np0005541455 kernel: CPU topo: Num. threads per package:   1
Dec  1 13:25:14 np0005541455 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  1 13:25:14 np0005541455 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  1 13:25:14 np0005541455 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  1 13:25:14 np0005541455 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  1 13:25:14 np0005541455 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  1 13:25:14 np0005541455 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  1 13:25:14 np0005541455 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  1 13:25:14 np0005541455 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  1 13:25:14 np0005541455 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  1 13:25:14 np0005541455 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  1 13:25:14 np0005541455 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  1 13:25:14 np0005541455 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  1 13:25:14 np0005541455 kernel: Booting paravirtualized kernel on KVM
Dec  1 13:25:14 np0005541455 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  1 13:25:14 np0005541455 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  1 13:25:14 np0005541455 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  1 13:25:14 np0005541455 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  1 13:25:14 np0005541455 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 13:25:14 np0005541455 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Dec  1 13:25:14 np0005541455 kernel: random: crng init done
Dec  1 13:25:14 np0005541455 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: Fallback order for Node 0: 0 
Dec  1 13:25:14 np0005541455 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  1 13:25:14 np0005541455 kernel: Policy zone: Normal
Dec  1 13:25:14 np0005541455 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  1 13:25:14 np0005541455 kernel: software IO TLB: area num 8.
Dec  1 13:25:14 np0005541455 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  1 13:25:14 np0005541455 kernel: ftrace: allocating 49313 entries in 193 pages
Dec  1 13:25:14 np0005541455 kernel: ftrace: allocated 193 pages with 3 groups
Dec  1 13:25:14 np0005541455 kernel: Dynamic Preempt: voluntary
Dec  1 13:25:14 np0005541455 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  1 13:25:14 np0005541455 kernel: rcu: 	RCU event tracing is enabled.
Dec  1 13:25:14 np0005541455 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  1 13:25:14 np0005541455 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  1 13:25:14 np0005541455 kernel: 	Rude variant of Tasks RCU enabled.
Dec  1 13:25:14 np0005541455 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  1 13:25:14 np0005541455 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  1 13:25:14 np0005541455 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  1 13:25:14 np0005541455 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 13:25:14 np0005541455 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 13:25:14 np0005541455 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  1 13:25:14 np0005541455 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  1 13:25:14 np0005541455 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  1 13:25:14 np0005541455 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
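Annotation: the KFENCE pool size above follows from its layout as described in the kernel's KFENCE documentation: two 4 KiB pages per object (one data page, one guard page) plus one extra leading pair. Arithmetic check:

    # (objects + 1) pairs of 4 KiB pages
    print((255 + 1) * 2 * 4096)  # 2097152, matching the line above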
Dec  1 13:25:14 np0005541455 kernel: Console: colour VGA+ 80x25
Dec  1 13:25:14 np0005541455 kernel: printk: console [ttyS0] enabled
Dec  1 13:25:14 np0005541455 kernel: ACPI: Core revision 20230331
Dec  1 13:25:14 np0005541455 kernel: APIC: Switch to symmetric I/O mode setup
Dec  1 13:25:14 np0005541455 kernel: x2apic enabled
Dec  1 13:25:14 np0005541455 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  1 13:25:14 np0005541455 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  1 13:25:14 np0005541455 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec  1 13:25:14 np0005541455 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  1 13:25:14 np0005541455 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  1 13:25:14 np0005541455 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  1 13:25:14 np0005541455 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  1 13:25:14 np0005541455 kernel: Spectre V2 : Mitigation: Retpolines
Dec  1 13:25:14 np0005541455 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  1 13:25:14 np0005541455 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  1 13:25:14 np0005541455 kernel: RETBleed: Mitigation: untrained return thunk
Dec  1 13:25:14 np0005541455 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  1 13:25:14 np0005541455 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  1 13:25:14 np0005541455 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  1 13:25:14 np0005541455 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  1 13:25:14 np0005541455 kernel: x86/bugs: return thunk changed
Dec  1 13:25:14 np0005541455 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  1 13:25:14 np0005541455 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  1 13:25:14 np0005541455 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  1 13:25:14 np0005541455 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  1 13:25:14 np0005541455 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  1 13:25:14 np0005541455 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  1 13:25:14 np0005541455 kernel: Freeing SMP alternatives memory: 40K
Dec  1 13:25:14 np0005541455 kernel: pid_max: default: 32768 minimum: 301
Dec  1 13:25:14 np0005541455 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  1 13:25:14 np0005541455 kernel: landlock: Up and running.
Dec  1 13:25:14 np0005541455 kernel: Yama: becoming mindful.
Dec  1 13:25:14 np0005541455 kernel: SELinux:  Initializing.
Dec  1 13:25:14 np0005541455 kernel: LSM support for eBPF active
Dec  1 13:25:14 np0005541455 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  1 13:25:14 np0005541455 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  1 13:25:14 np0005541455 kernel: ... version:                0
Dec  1 13:25:14 np0005541455 kernel: ... bit width:              48
Dec  1 13:25:14 np0005541455 kernel: ... generic registers:      6
Dec  1 13:25:14 np0005541455 kernel: ... value mask:             0000ffffffffffff
Dec  1 13:25:14 np0005541455 kernel: ... max period:             00007fffffffffff
Dec  1 13:25:14 np0005541455 kernel: ... fixed-purpose events:   0
Dec  1 13:25:14 np0005541455 kernel: ... event mask:             000000000000003f
Dec  1 13:25:14 np0005541455 kernel: signal: max sigframe size: 1776
Dec  1 13:25:14 np0005541455 kernel: rcu: Hierarchical SRCU implementation.
Dec  1 13:25:14 np0005541455 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  1 13:25:14 np0005541455 kernel: smp: Bringing up secondary CPUs ...
Dec  1 13:25:14 np0005541455 kernel: smpboot: x86: Booting SMP configuration:
Dec  1 13:25:14 np0005541455 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  1 13:25:14 np0005541455 kernel: smp: Brought up 1 node, 8 CPUs
Dec  1 13:25:14 np0005541455 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
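Annotation: the BogoMIPS figures above are bookkeeping, not a benchmark. With calibration skipped, the preset derives from the 2800.000 MHz TSC (lpj=2800000 in the earlier "Calibrating delay loop" line), and the total is simply eight CPUs' worth. A sketch of the arithmetic (HZ=1000 is assumed, the usual RHEL 9 configuration):

    lpj, HZ = 2_800_000, 1000          # lpj from the calibration line
    per_cpu = lpj * HZ / 500_000       # loops-per-jiffy -> BogoMIPS
    print(per_cpu, 8 * per_cpu)        # 5600.0 44800.0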
Dec  1 13:25:14 np0005541455 kernel: node 0 deferred pages initialised in 11ms
Dec  1 13:25:14 np0005541455 kernel: Memory: 7765924K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
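Annotation: a parsing sketch for the "Memory:" summary above (the regex targets only the leading available/total pair; the parenthesized field list varies across kernel versions):

    import re

    line = ("Memory: 7765924K/8388068K available (16384K kernel code, "
            "5787K rwdata, 13900K rodata, 4192K init, 7172K bss, "
            "616268K reserved, 0K cma-reserved)")
    avail_k, total_k = map(int, re.match(r"Memory: (\d+)K/(\d+)K", line).groups())
    print(f"{avail_k / 2**20:.2f} GiB available of {total_k / 2**20:.2f} GiB")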
Dec  1 13:25:14 np0005541455 kernel: devtmpfs: initialized
Dec  1 13:25:14 np0005541455 kernel: x86/mm: Memory block size: 128MB
Dec  1 13:25:14 np0005541455 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  1 13:25:14 np0005541455 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: pinctrl core: initialized pinctrl subsystem
Dec  1 13:25:14 np0005541455 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  1 13:25:14 np0005541455 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  1 13:25:14 np0005541455 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  1 13:25:14 np0005541455 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  1 13:25:14 np0005541455 kernel: audit: initializing netlink subsys (disabled)
Dec  1 13:25:14 np0005541455 kernel: audit: type=2000 audit(1764613512.681:1): state=initialized audit_enabled=0 res=1
Dec  1 13:25:14 np0005541455 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  1 13:25:14 np0005541455 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  1 13:25:14 np0005541455 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  1 13:25:14 np0005541455 kernel: cpuidle: using governor menu
Dec  1 13:25:14 np0005541455 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  1 13:25:14 np0005541455 kernel: PCI: Using configuration type 1 for base access
Dec  1 13:25:14 np0005541455 kernel: PCI: Using configuration type 1 for extended access
Dec  1 13:25:14 np0005541455 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  1 13:25:14 np0005541455 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  1 13:25:14 np0005541455 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  1 13:25:14 np0005541455 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  1 13:25:14 np0005541455 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
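Annotation: the two freeable-vmemmap figures above are consistent with a 64-byte struct page per 4 KiB subpage, keeping one 4 KiB metadata page per huge page (64 bytes is the typical x86_64 struct page size and is an assumption here):

    STRUCT_PAGE = 64   # bytes, assumed
    PAGE = 4096

    for huge in (1 << 30, 2 << 20):
        vmemmap = (huge // PAGE) * STRUCT_PAGE
        print((vmemmap - PAGE) // 1024, "KiB freeable")
    # -> 16380 KiB and 28 KiB, matching the two HugeTLB lines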
Dec  1 13:25:14 np0005541455 kernel: Demotion targets for Node 0: null
Dec  1 13:25:14 np0005541455 kernel: cryptd: max_cpu_qlen set to 1000
Dec  1 13:25:14 np0005541455 kernel: ACPI: Added _OSI(Module Device)
Dec  1 13:25:14 np0005541455 kernel: ACPI: Added _OSI(Processor Device)
Dec  1 13:25:14 np0005541455 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  1 13:25:14 np0005541455 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  1 13:25:14 np0005541455 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  1 13:25:14 np0005541455 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  1 13:25:14 np0005541455 kernel: ACPI: Interpreter enabled
Dec  1 13:25:14 np0005541455 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  1 13:25:14 np0005541455 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  1 13:25:14 np0005541455 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  1 13:25:14 np0005541455 kernel: PCI: Using E820 reservations for host bridge windows
Dec  1 13:25:14 np0005541455 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  1 13:25:14 np0005541455 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  1 13:25:14 np0005541455 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [3] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [4] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [5] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [6] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [7] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [8] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [9] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [10] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [11] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [12] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [13] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [14] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [15] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [16] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [17] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [18] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [19] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [20] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [21] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [22] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [23] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [24] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [25] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [26] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [27] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [28] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [29] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [30] registered
Dec  1 13:25:14 np0005541455 kernel: acpiphp: Slot [31] registered
Dec  1 13:25:14 np0005541455 kernel: PCI host bridge to bus 0000:00
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  1 13:25:14 np0005541455 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  1 13:25:14 np0005541455 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  1 13:25:14 np0005541455 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  1 13:25:14 np0005541455 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  1 13:25:14 np0005541455 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  1 13:25:14 np0005541455 kernel: iommu: Default domain type: Translated
Dec  1 13:25:14 np0005541455 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  1 13:25:14 np0005541455 kernel: SCSI subsystem initialized
Dec  1 13:25:14 np0005541455 kernel: ACPI: bus type USB registered
Dec  1 13:25:14 np0005541455 kernel: usbcore: registered new interface driver usbfs
Dec  1 13:25:14 np0005541455 kernel: usbcore: registered new interface driver hub
Dec  1 13:25:14 np0005541455 kernel: usbcore: registered new device driver usb
Dec  1 13:25:14 np0005541455 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  1 13:25:14 np0005541455 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  1 13:25:14 np0005541455 kernel: PTP clock support registered
Dec  1 13:25:14 np0005541455 kernel: EDAC MC: Ver: 3.0.0
Dec  1 13:25:14 np0005541455 kernel: NetLabel: Initializing
Dec  1 13:25:14 np0005541455 kernel: NetLabel:  domain hash size = 128
Dec  1 13:25:14 np0005541455 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  1 13:25:14 np0005541455 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  1 13:25:14 np0005541455 kernel: PCI: Using ACPI for IRQ routing
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  1 13:25:14 np0005541455 kernel: vgaarb: loaded
Dec  1 13:25:14 np0005541455 kernel: clocksource: Switched to clocksource kvm-clock
Dec  1 13:25:14 np0005541455 kernel: VFS: Disk quotas dquot_6.6.0
Dec  1 13:25:14 np0005541455 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  1 13:25:14 np0005541455 kernel: pnp: PnP ACPI init
Dec  1 13:25:14 np0005541455 kernel: pnp: PnP ACPI: found 5 devices
Dec  1 13:25:14 np0005541455 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  1 13:25:14 np0005541455 kernel: NET: Registered PF_INET protocol family
Dec  1 13:25:14 np0005541455 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  1 13:25:14 np0005541455 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  1 13:25:14 np0005541455 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
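Annotation: in the hash-table lines above (and the earlier Dentry/Inode-cache lines), the byte count is the full order-n allocation: 2**order contiguous 4 KiB pages. A spot-check of a few pairs (the MPTCP line reports a different accounting and is left out):

    PAGE = 4096
    for order, reported in [(11, 8388608), (10, 4194304),
                            (8, 1048576), (7, 524288), (5, 131072)]:
        assert (1 << order) * PAGE == reported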
Dec  1 13:25:14 np0005541455 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  1 13:25:14 np0005541455 kernel: NET: Registered PF_XDP protocol family
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  1 13:25:14 np0005541455 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  1 13:25:14 np0005541455 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  1 13:25:14 np0005541455 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 85015 usecs
Dec  1 13:25:14 np0005541455 kernel: PCI: CLS 0 bytes, default 64
Dec  1 13:25:14 np0005541455 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  1 13:25:14 np0005541455 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  1 13:25:14 np0005541455 kernel: ACPI: bus type thunderbolt registered
Dec  1 13:25:14 np0005541455 kernel: Trying to unpack rootfs image as initramfs...
Dec  1 13:25:14 np0005541455 kernel: Initialise system trusted keyrings
Dec  1 13:25:14 np0005541455 kernel: Key type blacklist registered
Dec  1 13:25:14 np0005541455 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  1 13:25:14 np0005541455 kernel: zbud: loaded
Dec  1 13:25:14 np0005541455 kernel: integrity: Platform Keyring initialized
Dec  1 13:25:14 np0005541455 kernel: integrity: Machine keyring initialized
Dec  1 13:25:14 np0005541455 kernel: Freeing initrd memory: 85868K
Dec  1 13:25:14 np0005541455 kernel: NET: Registered PF_ALG protocol family
Dec  1 13:25:14 np0005541455 kernel: xor: automatically using best checksumming function   avx       
Dec  1 13:25:14 np0005541455 kernel: Key type asymmetric registered
Dec  1 13:25:14 np0005541455 kernel: Asymmetric key parser 'x509' registered
Dec  1 13:25:14 np0005541455 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  1 13:25:14 np0005541455 kernel: io scheduler mq-deadline registered
Dec  1 13:25:14 np0005541455 kernel: io scheduler kyber registered
Dec  1 13:25:14 np0005541455 kernel: io scheduler bfq registered
Dec  1 13:25:14 np0005541455 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  1 13:25:14 np0005541455 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  1 13:25:14 np0005541455 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  1 13:25:14 np0005541455 kernel: ACPI: button: Power Button [PWRF]
Dec  1 13:25:14 np0005541455 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  1 13:25:14 np0005541455 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  1 13:25:14 np0005541455 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  1 13:25:14 np0005541455 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  1 13:25:14 np0005541455 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  1 13:25:14 np0005541455 kernel: Non-volatile memory driver v1.3
Dec  1 13:25:14 np0005541455 kernel: rdac: device handler registered
Dec  1 13:25:14 np0005541455 kernel: hp_sw: device handler registered
Dec  1 13:25:14 np0005541455 kernel: emc: device handler registered
Dec  1 13:25:14 np0005541455 kernel: alua: device handler registered
Dec  1 13:25:14 np0005541455 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  1 13:25:14 np0005541455 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  1 13:25:14 np0005541455 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  1 13:25:14 np0005541455 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  1 13:25:14 np0005541455 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  1 13:25:14 np0005541455 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  1 13:25:14 np0005541455 kernel: usb usb1: Product: UHCI Host Controller
Dec  1 13:25:14 np0005541455 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Dec  1 13:25:14 np0005541455 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  1 13:25:14 np0005541455 kernel: hub 1-0:1.0: USB hub found
Dec  1 13:25:14 np0005541455 kernel: hub 1-0:1.0: 2 ports detected
Dec  1 13:25:14 np0005541455 kernel: usbcore: registered new interface driver usbserial_generic
Dec  1 13:25:14 np0005541455 kernel: usbserial: USB Serial support registered for generic
Dec  1 13:25:14 np0005541455 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  1 13:25:14 np0005541455 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  1 13:25:14 np0005541455 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  1 13:25:14 np0005541455 kernel: mousedev: PS/2 mouse device common for all mice
Dec  1 13:25:14 np0005541455 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  1 13:25:14 np0005541455 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  1 13:25:14 np0005541455 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  1 13:25:14 np0005541455 kernel: rtc_cmos 00:04: registered as rtc0
Dec  1 13:25:14 np0005541455 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  1 13:25:14 np0005541455 kernel: rtc_cmos 00:04: setting system clock to 2025-12-01T18:25:13 UTC (1764613513)
Dec  1 13:25:14 np0005541455 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
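Annotation: rtc_cmos reads the hardware clock as UTC. Epoch 1764613513 in the line above is 2025-12-01T18:25:13 UTC, while the syslog prefixes show 13:25, i.e. a local timezone five hours behind UTC. The conversion:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1764613513, tz=timezone.utc).isoformat())
    # 2025-12-01T18:25:13+00:00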
Dec  1 13:25:14 np0005541455 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  1 13:25:14 np0005541455 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  1 13:25:14 np0005541455 kernel: usbcore: registered new interface driver usbhid
Dec  1 13:25:14 np0005541455 kernel: usbhid: USB HID core driver
Dec  1 13:25:14 np0005541455 kernel: drop_monitor: Initializing network drop monitor service
Dec  1 13:25:14 np0005541455 kernel: Initializing XFRM netlink socket
Dec  1 13:25:14 np0005541455 kernel: NET: Registered PF_INET6 protocol family
Dec  1 13:25:14 np0005541455 kernel: Segment Routing with IPv6
Dec  1 13:25:14 np0005541455 kernel: NET: Registered PF_PACKET protocol family
Dec  1 13:25:14 np0005541455 kernel: mpls_gso: MPLS GSO support
Dec  1 13:25:14 np0005541455 kernel: IPI shorthand broadcast: enabled
Dec  1 13:25:14 np0005541455 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  1 13:25:14 np0005541455 kernel: AES CTR mode by8 optimization enabled
Dec  1 13:25:14 np0005541455 kernel: sched_clock: Marking stable (1340002193, 144791778)->(1564551591, -79757620)
Dec  1 13:25:14 np0005541455 kernel: registered taskstats version 1
Dec  1 13:25:14 np0005541455 kernel: Loading compiled-in X.509 certificates
Dec  1 13:25:14 np0005541455 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Dec  1 13:25:14 np0005541455 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  1 13:25:14 np0005541455 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  1 13:25:14 np0005541455 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  1 13:25:14 np0005541455 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  1 13:25:14 np0005541455 kernel: Demotion targets for Node 0: null
Dec  1 13:25:14 np0005541455 kernel: page_owner is disabled
Dec  1 13:25:14 np0005541455 kernel: Key type .fscrypt registered
Dec  1 13:25:14 np0005541455 kernel: Key type fscrypt-provisioning registered
Dec  1 13:25:14 np0005541455 kernel: Key type big_key registered
Dec  1 13:25:14 np0005541455 kernel: Key type encrypted registered
Dec  1 13:25:14 np0005541455 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  1 13:25:14 np0005541455 kernel: Loading compiled-in module X.509 certificates
Dec  1 13:25:14 np0005541455 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Dec  1 13:25:14 np0005541455 kernel: ima: Allocated hash algorithm: sha256
Dec  1 13:25:14 np0005541455 kernel: ima: No architecture policies found
Dec  1 13:25:14 np0005541455 kernel: evm: Initialising EVM extended attributes:
Dec  1 13:25:14 np0005541455 kernel: evm: security.selinux
Dec  1 13:25:14 np0005541455 kernel: evm: security.SMACK64 (disabled)
Dec  1 13:25:14 np0005541455 kernel: evm: security.SMACK64EXEC (disabled)
Dec  1 13:25:14 np0005541455 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  1 13:25:14 np0005541455 kernel: evm: security.SMACK64MMAP (disabled)
Dec  1 13:25:14 np0005541455 kernel: evm: security.apparmor (disabled)
Dec  1 13:25:14 np0005541455 kernel: evm: security.ima
Dec  1 13:25:14 np0005541455 kernel: evm: security.capability
Dec  1 13:25:14 np0005541455 kernel: evm: HMAC attrs: 0x1
Dec  1 13:25:14 np0005541455 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  1 13:25:14 np0005541455 kernel: Running certificate verification RSA selftest
Dec  1 13:25:14 np0005541455 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  1 13:25:14 np0005541455 kernel: Running certificate verification ECDSA selftest
Dec  1 13:25:14 np0005541455 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  1 13:25:14 np0005541455 kernel: clk: Disabling unused clocks
Dec  1 13:25:14 np0005541455 kernel: Freeing unused decrypted memory: 2028K
Dec  1 13:25:14 np0005541455 kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec  1 13:25:14 np0005541455 kernel: Write protecting the kernel read-only data: 30720k
Dec  1 13:25:14 np0005541455 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Dec  1 13:25:14 np0005541455 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  1 13:25:14 np0005541455 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  1 13:25:14 np0005541455 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  1 13:25:14 np0005541455 kernel: usb 1-1: Manufacturer: QEMU
Dec  1 13:25:14 np0005541455 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  1 13:25:14 np0005541455 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  1 13:25:14 np0005541455 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  1 13:25:14 np0005541455 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  1 13:25:14 np0005541455 kernel: Run /init as init process
Dec  1 13:25:14 np0005541455 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  1 13:25:14 np0005541455 systemd: Detected virtualization kvm.
Dec  1 13:25:14 np0005541455 systemd: Detected architecture x86-64.
Dec  1 13:25:14 np0005541455 systemd: Running in initrd.
Dec  1 13:25:14 np0005541455 systemd: No hostname configured, using default hostname.
Dec  1 13:25:14 np0005541455 systemd: Hostname set to <localhost>.
Dec  1 13:25:14 np0005541455 systemd: Initializing machine ID from VM UUID.
Dec  1 13:25:14 np0005541455 systemd: Queued start job for default target Initrd Default Target.
Dec  1 13:25:14 np0005541455 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  1 13:25:14 np0005541455 systemd: Reached target Local Encrypted Volumes.
Dec  1 13:25:14 np0005541455 systemd: Reached target Initrd /usr File System.
Dec  1 13:25:14 np0005541455 systemd: Reached target Local File Systems.
Dec  1 13:25:14 np0005541455 systemd: Reached target Path Units.
Dec  1 13:25:14 np0005541455 systemd: Reached target Slice Units.
Dec  1 13:25:14 np0005541455 systemd: Reached target Swaps.
Dec  1 13:25:14 np0005541455 systemd: Reached target Timer Units.
Dec  1 13:25:14 np0005541455 systemd: Listening on D-Bus System Message Bus Socket.
Dec  1 13:25:14 np0005541455 systemd: Listening on Journal Socket (/dev/log).
Dec  1 13:25:14 np0005541455 systemd: Listening on Journal Socket.
Dec  1 13:25:14 np0005541455 systemd: Listening on udev Control Socket.
Dec  1 13:25:14 np0005541455 systemd: Listening on udev Kernel Socket.
Dec  1 13:25:14 np0005541455 systemd: Reached target Socket Units.
Dec  1 13:25:14 np0005541455 systemd: Starting Create List of Static Device Nodes...
Dec  1 13:25:14 np0005541455 systemd: Starting Journal Service...
Dec  1 13:25:14 np0005541455 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  1 13:25:14 np0005541455 systemd: Starting Apply Kernel Variables...
Dec  1 13:25:14 np0005541455 systemd: Starting Create System Users...
Dec  1 13:25:14 np0005541455 systemd: Starting Setup Virtual Console...
Dec  1 13:25:14 np0005541455 systemd: Finished Create List of Static Device Nodes.
Dec  1 13:25:14 np0005541455 systemd: Finished Apply Kernel Variables.
Dec  1 13:25:14 np0005541455 systemd: Finished Create System Users.
Dec  1 13:25:14 np0005541455 systemd: Starting Create Static Device Nodes in /dev...
Dec  1 13:25:14 np0005541455 systemd-journald[305]: Journal started
Dec  1 13:25:14 np0005541455 systemd-journald[305]: Runtime Journal (/run/log/journal/321a04b465954e40a9f1f8a11b88d7a9) is 8.0M, max 153.6M, 145.6M free.
Dec  1 13:25:14 np0005541455 systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec  1 13:25:14 np0005541455 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec  1 13:25:14 np0005541455 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  1 13:25:14 np0005541455 systemd: Started Journal Service.
Dec  1 13:25:14 np0005541455 systemd[1]: Starting Create Volatile Files and Directories...
Dec  1 13:25:14 np0005541455 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  1 13:25:14 np0005541455 systemd[1]: Finished Create Volatile Files and Directories.
Dec  1 13:25:14 np0005541455 systemd[1]: Finished Setup Virtual Console.
Dec  1 13:25:14 np0005541455 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  1 13:25:14 np0005541455 systemd[1]: Starting dracut cmdline hook...
Dec  1 13:25:14 np0005541455 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Dec  1 13:25:14 np0005541455 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  1 13:25:14 np0005541455 systemd[1]: Finished dracut cmdline hook.
Dec  1 13:25:14 np0005541455 systemd[1]: Starting dracut pre-udev hook...
Dec  1 13:25:14 np0005541455 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  1 13:25:14 np0005541455 kernel: device-mapper: uevent: version 1.0.3
Dec  1 13:25:14 np0005541455 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  1 13:25:14 np0005541455 kernel: RPC: Registered named UNIX socket transport module.
Dec  1 13:25:14 np0005541455 kernel: RPC: Registered udp transport module.
Dec  1 13:25:14 np0005541455 kernel: RPC: Registered tcp transport module.
Dec  1 13:25:14 np0005541455 kernel: RPC: Registered tcp-with-tls transport module.
Dec  1 13:25:14 np0005541455 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  1 13:25:14 np0005541455 rpc.statd[442]: Version 2.5.4 starting
Dec  1 13:25:14 np0005541455 rpc.statd[442]: Initializing NSM state
Dec  1 13:25:14 np0005541455 rpc.idmapd[447]: Setting log level to 0
Dec  1 13:25:14 np0005541455 systemd[1]: Finished dracut pre-udev hook.
Dec  1 13:25:14 np0005541455 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  1 13:25:14 np0005541455 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec  1 13:25:14 np0005541455 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  1 13:25:15 np0005541455 systemd[1]: Starting dracut pre-trigger hook...
Dec  1 13:25:15 np0005541455 systemd[1]: Finished dracut pre-trigger hook.
Dec  1 13:25:15 np0005541455 systemd[1]: Starting Coldplug All udev Devices...
Dec  1 13:25:15 np0005541455 systemd[1]: Created slice Slice /system/modprobe.
Dec  1 13:25:15 np0005541455 systemd[1]: Starting Load Kernel Module configfs...
Dec  1 13:25:15 np0005541455 systemd[1]: Finished Coldplug All udev Devices.
Dec  1 13:25:15 np0005541455 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 13:25:15 np0005541455 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 13:25:15 np0005541455 systemd[1]: Mounting Kernel Configuration File System...
Dec  1 13:25:15 np0005541455 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  1 13:25:15 np0005541455 systemd[1]: Reached target Network.
Dec  1 13:25:15 np0005541455 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  1 13:25:15 np0005541455 systemd[1]: Starting dracut initqueue hook...
Dec  1 13:25:15 np0005541455 systemd[1]: Mounted Kernel Configuration File System.
Dec  1 13:25:15 np0005541455 systemd[1]: Reached target System Initialization.
Dec  1 13:25:15 np0005541455 systemd[1]: Reached target Basic System.
Dec  1 13:25:15 np0005541455 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  1 13:25:15 np0005541455 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
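Annotation: the virtio_blk capacity line above reports the same size both ways: 167772160 sectors of 512 bytes, rendered in decimal GB and binary GiB:

    sectors = 167_772_160
    size = sectors * 512
    print(f"{size / 10**9:.1f} GB / {size / 2**30:.1f} GiB")  # 85.9 GB / 80.0 GiB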
Dec  1 13:25:15 np0005541455 systemd-udevd[478]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 13:25:15 np0005541455 kernel: vda: vda1
Dec  1 13:25:15 np0005541455 kernel: scsi host0: ata_piix
Dec  1 13:25:15 np0005541455 kernel: scsi host1: ata_piix
Dec  1 13:25:15 np0005541455 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  1 13:25:15 np0005541455 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  1 13:25:15 np0005541455 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  1 13:25:15 np0005541455 systemd[1]: Reached target Initrd Root Device.
Dec  1 13:25:15 np0005541455 kernel: ata1: found unknown device (class 0)
Dec  1 13:25:15 np0005541455 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  1 13:25:15 np0005541455 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  1 13:25:15 np0005541455 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  1 13:25:15 np0005541455 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  1 13:25:15 np0005541455 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  1 13:25:15 np0005541455 systemd[1]: Finished dracut initqueue hook.
Dec  1 13:25:15 np0005541455 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  1 13:25:15 np0005541455 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  1 13:25:15 np0005541455 systemd[1]: Reached target Remote File Systems.
Dec  1 13:25:15 np0005541455 systemd[1]: Starting dracut pre-mount hook...
Dec  1 13:25:15 np0005541455 systemd[1]: Finished dracut pre-mount hook.
Dec  1 13:25:15 np0005541455 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Dec  1 13:25:15 np0005541455 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Dec  1 13:25:15 np0005541455 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Dec  1 13:25:15 np0005541455 systemd[1]: Mounting /sysroot...
Dec  1 13:25:16 np0005541455 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  1 13:25:16 np0005541455 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Dec  1 13:25:16 np0005541455 kernel: XFS (vda1): Ending clean mount
Dec  1 13:25:16 np0005541455 systemd[1]: Mounted /sysroot.
Dec  1 13:25:16 np0005541455 systemd[1]: Reached target Initrd Root File System.
Dec  1 13:25:16 np0005541455 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  1 13:25:16 np0005541455 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  1 13:25:16 np0005541455 systemd[1]: Reached target Initrd File Systems.
Dec  1 13:25:16 np0005541455 systemd[1]: Reached target Initrd Default Target.
Dec  1 13:25:16 np0005541455 systemd[1]: Starting dracut mount hook...
Dec  1 13:25:16 np0005541455 systemd[1]: Finished dracut mount hook.
Dec  1 13:25:16 np0005541455 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  1 13:25:16 np0005541455 rpc.idmapd[447]: exiting on signal 15
Dec  1 13:25:16 np0005541455 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  1 13:25:16 np0005541455 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Network.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Timer Units.
Dec  1 13:25:16 np0005541455 systemd[1]: dbus.socket: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  1 13:25:16 np0005541455 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Initrd Default Target.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Basic System.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Initrd Root Device.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Initrd /usr File System.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Path Units.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Remote File Systems.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Slice Units.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Socket Units.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target System Initialization.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Local File Systems.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Swaps.
Dec  1 13:25:16 np0005541455 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped dracut mount hook.
Dec  1 13:25:16 np0005541455 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped dracut pre-mount hook.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  1 13:25:16 np0005541455 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped dracut initqueue hook.
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped Apply Kernel Variables.
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped Coldplug All udev Devices.
Dec  1 13:25:16 np0005541455 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped dracut pre-trigger hook.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped Setup Virtual Console.
Dec  1 13:25:16 np0005541455 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Closed udev Control Socket.
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Closed udev Kernel Socket.
Dec  1 13:25:16 np0005541455 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped dracut pre-udev hook.
Dec  1 13:25:16 np0005541455 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped dracut cmdline hook.
Dec  1 13:25:16 np0005541455 systemd[1]: Starting Cleanup udev Database...
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  1 13:25:16 np0005541455 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  1 13:25:16 np0005541455 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Stopped Create System Users.
Dec  1 13:25:16 np0005541455 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  1 13:25:16 np0005541455 systemd[1]: Finished Cleanup udev Database.
Dec  1 13:25:16 np0005541455 systemd[1]: Reached target Switch Root.
Dec  1 13:25:16 np0005541455 systemd[1]: Starting Switch Root...
Dec  1 13:25:16 np0005541455 systemd[1]: Switching root.
Dec  1 13:25:16 np0005541455 systemd-journald[305]: Journal stopped
Dec  1 13:25:17 np0005541455 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  1 13:25:17 np0005541455 kernel: audit: type=1404 audit(1764613516.760:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  1 13:25:17 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 13:25:17 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 13:25:17 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 13:25:17 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 13:25:17 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 13:25:17 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 13:25:17 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 13:25:17 np0005541455 kernel: audit: type=1403 audit(1764613516.901:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  1 13:25:17 np0005541455 systemd: Successfully loaded SELinux policy in 145.760ms.
Dec  1 13:25:17 np0005541455 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.991ms.
Dec  1 13:25:17 np0005541455 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  1 13:25:17 np0005541455 systemd: Detected virtualization kvm.
Dec  1 13:25:17 np0005541455 systemd: Detected architecture x86-64.
Dec  1 13:25:17 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 13:25:17 np0005541455 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  1 13:25:17 np0005541455 systemd: Stopped Switch Root.
Dec  1 13:25:17 np0005541455 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  1 13:25:17 np0005541455 systemd: Created slice Slice /system/getty.
Dec  1 13:25:17 np0005541455 systemd: Created slice Slice /system/serial-getty.
Dec  1 13:25:17 np0005541455 systemd: Created slice Slice /system/sshd-keygen.
Dec  1 13:25:17 np0005541455 systemd: Created slice User and Session Slice.
Dec  1 13:25:17 np0005541455 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  1 13:25:17 np0005541455 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  1 13:25:17 np0005541455 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  1 13:25:17 np0005541455 systemd: Reached target Local Encrypted Volumes.
Dec  1 13:25:17 np0005541455 systemd: Stopped target Switch Root.
Dec  1 13:25:17 np0005541455 systemd: Stopped target Initrd File Systems.
Dec  1 13:25:17 np0005541455 systemd: Stopped target Initrd Root File System.
Dec  1 13:25:17 np0005541455 systemd: Reached target Local Integrity Protected Volumes.
Dec  1 13:25:17 np0005541455 systemd: Reached target Path Units.
Dec  1 13:25:17 np0005541455 systemd: Reached target rpc_pipefs.target.
Dec  1 13:25:17 np0005541455 systemd: Reached target Slice Units.
Dec  1 13:25:17 np0005541455 systemd: Reached target Swaps.
Dec  1 13:25:17 np0005541455 systemd: Reached target Local Verity Protected Volumes.
Dec  1 13:25:17 np0005541455 systemd: Listening on RPCbind Server Activation Socket.
Dec  1 13:25:17 np0005541455 systemd: Reached target RPC Port Mapper.
Dec  1 13:25:17 np0005541455 systemd: Listening on Process Core Dump Socket.
Dec  1 13:25:17 np0005541455 systemd: Listening on initctl Compatibility Named Pipe.
Dec  1 13:25:17 np0005541455 systemd: Listening on udev Control Socket.
Dec  1 13:25:17 np0005541455 systemd: Listening on udev Kernel Socket.
Dec  1 13:25:17 np0005541455 systemd: Mounting Huge Pages File System...
Dec  1 13:25:17 np0005541455 systemd: Mounting POSIX Message Queue File System...
Dec  1 13:25:17 np0005541455 systemd: Mounting Kernel Debug File System...
Dec  1 13:25:17 np0005541455 systemd: Mounting Kernel Trace File System...
Dec  1 13:25:17 np0005541455 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  1 13:25:17 np0005541455 systemd: Starting Create List of Static Device Nodes...
Dec  1 13:25:17 np0005541455 systemd: Starting Load Kernel Module configfs...
Dec  1 13:25:17 np0005541455 systemd: Starting Load Kernel Module drm...
Dec  1 13:25:17 np0005541455 systemd: Starting Load Kernel Module efi_pstore...
Dec  1 13:25:17 np0005541455 systemd: Starting Load Kernel Module fuse...
Dec  1 13:25:17 np0005541455 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  1 13:25:17 np0005541455 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  1 13:25:17 np0005541455 systemd: Stopped File System Check on Root Device.
Dec  1 13:25:17 np0005541455 systemd: Stopped Journal Service.
Dec  1 13:25:17 np0005541455 systemd: Starting Journal Service...
Dec  1 13:25:17 np0005541455 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  1 13:25:17 np0005541455 systemd: Starting Generate network units from Kernel command line...
Dec  1 13:25:17 np0005541455 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 13:25:17 np0005541455 kernel: fuse: init (API version 7.37)
Dec  1 13:25:17 np0005541455 systemd: Starting Remount Root and Kernel File Systems...
Dec  1 13:25:17 np0005541455 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  1 13:25:17 np0005541455 systemd: Starting Apply Kernel Variables...
Dec  1 13:25:17 np0005541455 systemd: Starting Coldplug All udev Devices...
Dec  1 13:25:17 np0005541455 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  1 13:25:17 np0005541455 systemd: Mounted Huge Pages File System.
Dec  1 13:25:17 np0005541455 systemd: Mounted POSIX Message Queue File System.
Dec  1 13:25:17 np0005541455 systemd: Mounted Kernel Debug File System.
Dec  1 13:25:17 np0005541455 systemd: Mounted Kernel Trace File System.
Dec  1 13:25:17 np0005541455 systemd: Finished Create List of Static Device Nodes.
Dec  1 13:25:17 np0005541455 systemd: modprobe@configfs.service: Deactivated successfully.
Dec  1 13:25:17 np0005541455 systemd-journald[677]: Journal started
Dec  1 13:25:17 np0005541455 systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  1 13:25:17 np0005541455 systemd[1]: Queued start job for default target Multi-User System.
Dec  1 13:25:17 np0005541455 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  1 13:25:17 np0005541455 systemd: Finished Load Kernel Module configfs.
Dec  1 13:25:17 np0005541455 systemd: Started Journal Service.
Dec  1 13:25:17 np0005541455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  1 13:25:17 np0005541455 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Load Kernel Module fuse.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Generate network units from Kernel command line.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Apply Kernel Variables.
Dec  1 13:25:17 np0005541455 kernel: ACPI: bus type drm_connector registered
Dec  1 13:25:17 np0005541455 systemd[1]: Mounting FUSE Control File System...
Dec  1 13:25:17 np0005541455 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Rebuild Hardware Database...
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  1 13:25:17 np0005541455 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Load/Save OS Random Seed...
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Create System Users...
Dec  1 13:25:17 np0005541455 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Load Kernel Module drm.
Dec  1 13:25:17 np0005541455 systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Dec  1 13:25:17 np0005541455 systemd-journald[677]: Received client request to flush runtime journal.
Dec  1 13:25:17 np0005541455 systemd[1]: Mounted FUSE Control File System.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Load/Save OS Random Seed.
Dec  1 13:25:17 np0005541455 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Coldplug All udev Devices.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Create System Users.
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  1 13:25:17 np0005541455 systemd[1]: Reached target Preparation for Local File Systems.
Dec  1 13:25:17 np0005541455 systemd[1]: Reached target Local File Systems.
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  1 13:25:17 np0005541455 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  1 13:25:17 np0005541455 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  1 13:25:17 np0005541455 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Automatic Boot Loader Update...
Dec  1 13:25:17 np0005541455 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Create Volatile Files and Directories...
Dec  1 13:25:17 np0005541455 bootctl[695]: Couldn't find EFI system partition, skipping.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Automatic Boot Loader Update.
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Create Volatile Files and Directories.
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Security Auditing Service...
Dec  1 13:25:17 np0005541455 systemd[1]: Starting RPC Bind...
Dec  1 13:25:17 np0005541455 systemd[1]: Starting Rebuild Journal Catalog...
Dec  1 13:25:17 np0005541455 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  1 13:25:17 np0005541455 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  1 13:25:17 np0005541455 systemd[1]: Finished Rebuild Journal Catalog.
Dec  1 13:25:17 np0005541455 systemd[1]: Started RPC Bind.
Dec  1 13:25:17 np0005541455 augenrules[706]: /sbin/augenrules: No change
Dec  1 13:25:18 np0005541455 augenrules[721]: No rules
Dec  1 13:25:18 np0005541455 augenrules[721]: enabled 1
Dec  1 13:25:18 np0005541455 augenrules[721]: failure 1
Dec  1 13:25:18 np0005541455 augenrules[721]: pid 701
Dec  1 13:25:18 np0005541455 augenrules[721]: rate_limit 0
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog_limit 8192
Dec  1 13:25:18 np0005541455 augenrules[721]: lost 0
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog 0
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog_wait_time 60000
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog_wait_time_actual 0
Dec  1 13:25:18 np0005541455 augenrules[721]: enabled 1
Dec  1 13:25:18 np0005541455 augenrules[721]: failure 1
Dec  1 13:25:18 np0005541455 augenrules[721]: pid 701
Dec  1 13:25:18 np0005541455 augenrules[721]: rate_limit 0
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog_limit 8192
Dec  1 13:25:18 np0005541455 augenrules[721]: lost 0
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog 4
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog_wait_time 60000
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog_wait_time_actual 0
Dec  1 13:25:18 np0005541455 augenrules[721]: enabled 1
Dec  1 13:25:18 np0005541455 augenrules[721]: failure 1
Dec  1 13:25:18 np0005541455 augenrules[721]: pid 701
Dec  1 13:25:18 np0005541455 augenrules[721]: rate_limit 0
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog_limit 8192
Dec  1 13:25:18 np0005541455 augenrules[721]: lost 0
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog 4
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog_wait_time 60000
Dec  1 13:25:18 np0005541455 augenrules[721]: backlog_wait_time_actual 0
Dec  1 13:25:18 np0005541455 systemd[1]: Started Security Auditing Service.
Dec  1 13:25:18 np0005541455 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  1 13:25:18 np0005541455 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  1 13:25:18 np0005541455 systemd[1]: Finished Rebuild Hardware Database.
Dec  1 13:25:18 np0005541455 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  1 13:25:18 np0005541455 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  1 13:25:18 np0005541455 systemd[1]: Starting Update is Completed...
Dec  1 13:25:18 np0005541455 systemd[1]: Finished Update is Completed.
Dec  1 13:25:18 np0005541455 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec  1 13:25:18 np0005541455 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  1 13:25:18 np0005541455 systemd[1]: Reached target System Initialization.
Dec  1 13:25:18 np0005541455 systemd[1]: Started dnf makecache --timer.
Dec  1 13:25:18 np0005541455 systemd[1]: Started Daily rotation of log files.
Dec  1 13:25:18 np0005541455 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  1 13:25:18 np0005541455 systemd[1]: Reached target Timer Units.
Dec  1 13:25:18 np0005541455 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  1 13:25:18 np0005541455 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  1 13:25:18 np0005541455 systemd[1]: Reached target Socket Units.
Dec  1 13:25:18 np0005541455 systemd[1]: Starting D-Bus System Message Bus...
Dec  1 13:25:18 np0005541455 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 13:25:18 np0005541455 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  1 13:25:18 np0005541455 systemd[1]: Starting Load Kernel Module configfs...
Dec  1 13:25:18 np0005541455 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  1 13:25:18 np0005541455 systemd[1]: Finished Load Kernel Module configfs.
Dec  1 13:25:18 np0005541455 systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 13:25:18 np0005541455 systemd[1]: Started D-Bus System Message Bus.
Dec  1 13:25:18 np0005541455 systemd[1]: Reached target Basic System.
Dec  1 13:25:18 np0005541455 dbus-broker-lau[763]: Ready
Dec  1 13:25:18 np0005541455 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  1 13:25:18 np0005541455 systemd[1]: Starting NTP client/server...
Dec  1 13:25:18 np0005541455 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  1 13:25:18 np0005541455 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  1 13:25:18 np0005541455 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  1 13:25:18 np0005541455 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  1 13:25:18 np0005541455 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  1 13:25:18 np0005541455 systemd[1]: Starting IPv4 firewall with iptables...
Dec  1 13:25:18 np0005541455 chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  1 13:25:18 np0005541455 chronyd[792]: Loaded 0 symmetric keys
Dec  1 13:25:18 np0005541455 systemd[1]: Started irqbalance daemon.
Dec  1 13:25:18 np0005541455 chronyd[792]: Using right/UTC timezone to obtain leap second data
Dec  1 13:25:18 np0005541455 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  1 13:25:18 np0005541455 chronyd[792]: Loaded seccomp filter (level 2)
Dec  1 13:25:18 np0005541455 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 13:25:18 np0005541455 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 13:25:18 np0005541455 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 13:25:18 np0005541455 systemd[1]: Reached target sshd-keygen.target.
Dec  1 13:25:18 np0005541455 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  1 13:25:18 np0005541455 systemd[1]: Reached target User and Group Name Lookups.
Dec  1 13:25:18 np0005541455 systemd[1]: Starting User Login Management...
Dec  1 13:25:18 np0005541455 systemd[1]: Started NTP client/server.
Dec  1 13:25:18 np0005541455 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  1 13:25:18 np0005541455 kernel: kvm_amd: TSC scaling supported
Dec  1 13:25:18 np0005541455 kernel: kvm_amd: Nested Virtualization enabled
Dec  1 13:25:18 np0005541455 kernel: kvm_amd: Nested Paging enabled
Dec  1 13:25:18 np0005541455 kernel: kvm_amd: LBR virtualization supported
Dec  1 13:25:18 np0005541455 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  1 13:25:18 np0005541455 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  1 13:25:18 np0005541455 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  1 13:25:18 np0005541455 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  1 13:25:18 np0005541455 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  1 13:25:18 np0005541455 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  1 13:25:18 np0005541455 kernel: Console: switching to colour dummy device 80x25
Dec  1 13:25:18 np0005541455 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  1 13:25:18 np0005541455 kernel: [drm] features: -context_init
Dec  1 13:25:18 np0005541455 kernel: [drm] number of scanouts: 1
Dec  1 13:25:18 np0005541455 kernel: [drm] number of cap sets: 0
Dec  1 13:25:18 np0005541455 systemd-logind[797]: New seat seat0.
Dec  1 13:25:18 np0005541455 systemd[1]: Started User Login Management.
Dec  1 13:25:18 np0005541455 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  1 13:25:18 np0005541455 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  1 13:25:18 np0005541455 kernel: Console: switching to colour frame buffer device 128x48
Dec  1 13:25:18 np0005541455 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  1 13:25:18 np0005541455 iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Dec  1 13:25:18 np0005541455 systemd[1]: Finished IPv4 firewall with iptables.
Dec  1 13:25:19 np0005541455 cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 01 Dec 2025 18:25:19 +0000. Up 6.84 seconds.
Dec  1 13:25:19 np0005541455 systemd[1]: run-cloud\x2dinit-tmp-tmpr_lx52lm.mount: Deactivated successfully.
Dec  1 13:25:19 np0005541455 systemd[1]: Starting Hostname Service...
Dec  1 13:25:19 np0005541455 systemd[1]: Started Hostname Service.
Dec  1 13:25:19 np0005541455 systemd-hostnamed[851]: Hostname set to <np0005541455.novalocal> (static)
Dec  1 13:25:19 np0005541455 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  1 13:25:19 np0005541455 systemd[1]: Reached target Preparation for Network.
Dec  1 13:25:19 np0005541455 systemd[1]: Starting Network Manager...
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7201] NetworkManager (version 1.54.1-1.el9) is starting... (boot:c12f9c43-c499-4c8a-a9df-8527ffbb5e7f)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7206] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7279] manager[0x564eeeb7e080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7335] hostname: hostname: using hostnamed
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7335] hostname: static hostname changed from (none) to "np0005541455.novalocal"
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7340] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7497] manager[0x564eeeb7e080]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7498] manager[0x564eeeb7e080]: rfkill: WWAN hardware radio set enabled
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7552] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7554] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7555] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7555] manager: Networking is enabled by state file
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7558] settings: Loaded settings plugin: keyfile (internal)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7602] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7629] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 13:25:19 np0005541455 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7644] dhcp: init: Using DHCP client 'internal'
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7648] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7663] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7675] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7683] device (lo): Activation: starting connection 'lo' (05520b0a-6bbf-47af-9e84-ea1a46a10382)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7694] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7698] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7732] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7736] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7739] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7742] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7744] device (eth0): carrier: link connected
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7747] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7754] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7760] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7764] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7765] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7767] manager: NetworkManager state is now CONNECTING
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7768] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7775] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7778] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7815] dhcp4 (eth0): state changed new lease, address=38.102.83.97
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7822] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.7839] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 13:25:19 np0005541455 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 13:25:19 np0005541455 systemd[1]: Started Network Manager.
Dec  1 13:25:19 np0005541455 systemd[1]: Reached target Network.
Dec  1 13:25:19 np0005541455 systemd[1]: Starting Network Manager Wait Online...
Dec  1 13:25:19 np0005541455 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.8139] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.8144] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.8147] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.8160] device (lo): Activation: successful, device activated.
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.8171] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.8178] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 13:25:19 np0005541455 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.8185] device (eth0): Activation: successful, device activated.
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.8195] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 13:25:19 np0005541455 NetworkManager[856]: <info>  [1764613519.8202] manager: startup complete
Dec  1 13:25:19 np0005541455 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  1 13:25:19 np0005541455 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  1 13:25:19 np0005541455 systemd[1]: Reached target NFS client services.
Dec  1 13:25:19 np0005541455 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  1 13:25:19 np0005541455 systemd[1]: Reached target Remote File Systems.
Dec  1 13:25:19 np0005541455 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  1 13:25:19 np0005541455 systemd[1]: Finished Network Manager Wait Online.
Dec  1 13:25:19 np0005541455 systemd[1]: Starting Cloud-init: Network Stage...
Dec  1 13:25:20 np0005541455 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 01 Dec 2025 18:25:20 +0000. Up 7.93 seconds.
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: |  eth0  | True |         38.102.83.97         | 255.255.255.0 | global | fa:16:3e:e7:60:e7 |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fee7:60e7/64 |       .       |  link  | fa:16:3e:e7:60:e7 |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  1 13:25:20 np0005541455 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  1 13:25:21 np0005541455 cloud-init[921]: Generating public/private rsa key pair.
Dec  1 13:25:21 np0005541455 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  1 13:25:21 np0005541455 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  1 13:25:21 np0005541455 cloud-init[921]: The key fingerprint is:
Dec  1 13:25:21 np0005541455 cloud-init[921]: SHA256:X3J5bfpOkl3EzYFUE5bnvlUmDp1Ok9L3QyhhSgQdcMY root@np0005541455.novalocal
Dec  1 13:25:21 np0005541455 cloud-init[921]: The key's randomart image is:
Dec  1 13:25:21 np0005541455 cloud-init[921]: +---[RSA 3072]----+
Dec  1 13:25:21 np0005541455 cloud-init[921]: |       o*=. ..o*o|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |        oE o ..+=|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |        . o .o.+*|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |         . .ooO+=|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |        S . =*o**|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |         . + .o*=|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |          .   + *|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |               = |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |               .o|
Dec  1 13:25:21 np0005541455 cloud-init[921]: +----[SHA256]-----+
Dec  1 13:25:21 np0005541455 cloud-init[921]: Generating public/private ecdsa key pair.
Dec  1 13:25:21 np0005541455 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  1 13:25:21 np0005541455 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  1 13:25:21 np0005541455 cloud-init[921]: The key fingerprint is:
Dec  1 13:25:21 np0005541455 cloud-init[921]: SHA256:9uo903euXySnO3FPuY0dcwycIf3He/3lgYnz9fGx/Go root@np0005541455.novalocal
Dec  1 13:25:21 np0005541455 cloud-init[921]: The key's randomart image is:
Dec  1 13:25:21 np0005541455 cloud-init[921]: +---[ECDSA 256]---+
Dec  1 13:25:21 np0005541455 cloud-init[921]: |             .   |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |            . o  |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |             o = |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |              + +|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |        S   . +oB|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |       . . o o.#O|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |          ..o +B^|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |         oo ..E=@|
Dec  1 13:25:21 np0005541455 cloud-init[921]: |       .o .o o+B=|
Dec  1 13:25:21 np0005541455 cloud-init[921]: +----[SHA256]-----+
Dec  1 13:25:21 np0005541455 cloud-init[921]: Generating public/private ed25519 key pair.
Dec  1 13:25:21 np0005541455 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  1 13:25:21 np0005541455 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  1 13:25:21 np0005541455 cloud-init[921]: The key fingerprint is:
Dec  1 13:25:21 np0005541455 cloud-init[921]: SHA256:J137RHjMDgi6Ih/O2lRsHZJeIzhA9w7i2tklGJucjmA root@np0005541455.novalocal
Dec  1 13:25:21 np0005541455 cloud-init[921]: The key's randomart image is:
Dec  1 13:25:21 np0005541455 cloud-init[921]: +--[ED25519 256]--+
Dec  1 13:25:21 np0005541455 cloud-init[921]: | .o .   .        |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |   o o o . . +   |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |  o + * + . + =  |
Dec  1 13:25:21 np0005541455 cloud-init[921]: | o B * * + . *   |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |.EO + O S o . o  |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |o= B B   o   o   |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |o + *         .  |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |   +             |
Dec  1 13:25:21 np0005541455 cloud-init[921]: |  . .            |
Dec  1 13:25:21 np0005541455 cloud-init[921]: +----[SHA256]-----+
Dec  1 13:25:21 np0005541455 systemd[1]: Finished Cloud-init: Network Stage.
Dec  1 13:25:21 np0005541455 systemd[1]: Reached target Cloud-config availability.
Dec  1 13:25:21 np0005541455 systemd[1]: Reached target Network is Online.
Dec  1 13:25:21 np0005541455 systemd[1]: Starting Cloud-init: Config Stage...
Dec  1 13:25:21 np0005541455 systemd[1]: Starting Crash recovery kernel arming...
Dec  1 13:25:21 np0005541455 systemd[1]: Starting Notify NFS peers of a restart...
Dec  1 13:25:21 np0005541455 systemd[1]: Starting System Logging Service...
Dec  1 13:25:21 np0005541455 sm-notify[1004]: Version 2.5.4 starting
Dec  1 13:25:21 np0005541455 systemd[1]: Starting OpenSSH server daemon...
Dec  1 13:25:21 np0005541455 systemd[1]: Starting Permit User Sessions...
Dec  1 13:25:21 np0005541455 systemd[1]: Started Notify NFS peers of a restart.
Dec  1 13:25:21 np0005541455 systemd[1]: Started OpenSSH server daemon.
Dec  1 13:25:21 np0005541455 systemd[1]: Finished Permit User Sessions.
Dec  1 13:25:21 np0005541455 systemd[1]: Started Command Scheduler.
Dec  1 13:25:21 np0005541455 systemd[1]: Started Getty on tty1.
Dec  1 13:25:21 np0005541455 systemd[1]: Started Serial Getty on ttyS0.
Dec  1 13:25:21 np0005541455 systemd[1]: Reached target Login Prompts.
Dec  1 13:25:21 np0005541455 systemd[1]: Started System Logging Service.
Dec  1 13:25:21 np0005541455 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Dec  1 13:25:21 np0005541455 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  1 13:25:21 np0005541455 systemd[1]: Reached target Multi-User System.
Dec  1 13:25:21 np0005541455 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  1 13:25:21 np0005541455 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  1 13:25:21 np0005541455 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  1 13:25:21 np0005541455 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 13:25:21 np0005541455 kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Dec  1 13:25:21 np0005541455 kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Dec  1 13:25:21 np0005541455 cloud-init[1117]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 01 Dec 2025 18:25:21 +0000. Up 9.55 seconds.
Dec  1 13:25:21 np0005541455 systemd[1]: Finished Cloud-init: Config Stage.
Dec  1 13:25:21 np0005541455 systemd[1]: Starting Cloud-init: Final Stage...
Dec  1 13:25:22 np0005541455 dracut[1267]: dracut-057-102.git20250818.el9
Dec  1 13:25:22 np0005541455 cloud-init[1268]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 01 Dec 2025 18:25:22 +0000. Up 9.97 seconds.
Dec  1 13:25:22 np0005541455 cloud-init[1285]: #############################################################
Dec  1 13:25:22 np0005541455 cloud-init[1286]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  1 13:25:22 np0005541455 cloud-init[1288]: 256 SHA256:9uo903euXySnO3FPuY0dcwycIf3He/3lgYnz9fGx/Go root@np0005541455.novalocal (ECDSA)
Dec  1 13:25:22 np0005541455 cloud-init[1290]: 256 SHA256:J137RHjMDgi6Ih/O2lRsHZJeIzhA9w7i2tklGJucjmA root@np0005541455.novalocal (ED25519)
Dec  1 13:25:22 np0005541455 cloud-init[1292]: 3072 SHA256:X3J5bfpOkl3EzYFUE5bnvlUmDp1Ok9L3QyhhSgQdcMY root@np0005541455.novalocal (RSA)
Dec  1 13:25:22 np0005541455 cloud-init[1293]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  1 13:25:22 np0005541455 cloud-init[1294]: #############################################################
Dec  1 13:25:22 np0005541455 cloud-init[1268]: Cloud-init v. 24.4-7.el9 finished at Mon, 01 Dec 2025 18:25:22 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.19 seconds
Dec  1 13:25:22 np0005541455 dracut[1270]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Dec  1 13:25:22 np0005541455 systemd[1]: Finished Cloud-init: Final Stage.
Dec  1 13:25:22 np0005541455 systemd[1]: Reached target Cloud-init target.
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  1 13:25:22 np0005541455 dracut[1270]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: memstrack is not available
Dec  1 13:25:23 np0005541455 dracut[1270]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  1 13:25:23 np0005541455 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  1 13:25:24 np0005541455 dracut[1270]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  1 13:25:24 np0005541455 dracut[1270]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  1 13:25:24 np0005541455 dracut[1270]: memstrack is not available
Dec  1 13:25:24 np0005541455 dracut[1270]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  1 13:25:24 np0005541455 dracut[1270]: *** Including module: systemd ***
Dec  1 13:25:24 np0005541455 dracut[1270]: *** Including module: fips ***
Dec  1 13:25:24 np0005541455 chronyd[792]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Dec  1 13:25:24 np0005541455 chronyd[792]: System clock TAI offset set to 37 seconds
Dec  1 13:25:24 np0005541455 dracut[1270]: *** Including module: systemd-initrd ***
Dec  1 13:25:24 np0005541455 dracut[1270]: *** Including module: i18n ***
Dec  1 13:25:25 np0005541455 dracut[1270]: *** Including module: drm ***
Dec  1 13:25:25 np0005541455 dracut[1270]: *** Including module: prefixdevname ***
Dec  1 13:25:25 np0005541455 dracut[1270]: *** Including module: kernel-modules ***
Dec  1 13:25:25 np0005541455 kernel: block vda: the capability attribute has been deprecated.
Dec  1 13:25:26 np0005541455 chronyd[792]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Dec  1 13:25:26 np0005541455 dracut[1270]: *** Including module: kernel-modules-extra ***
Dec  1 13:25:26 np0005541455 dracut[1270]: *** Including module: qemu ***
Dec  1 13:25:26 np0005541455 dracut[1270]: *** Including module: fstab-sys ***
Dec  1 13:25:26 np0005541455 dracut[1270]: *** Including module: rootfs-block ***
Dec  1 13:25:26 np0005541455 dracut[1270]: *** Including module: terminfo ***
Dec  1 13:25:26 np0005541455 dracut[1270]: *** Including module: udev-rules ***
Dec  1 13:25:27 np0005541455 dracut[1270]: Skipping udev rule: 91-permissions.rules
Dec  1 13:25:27 np0005541455 dracut[1270]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  1 13:25:27 np0005541455 dracut[1270]: *** Including module: virtiofs ***
Dec  1 13:25:27 np0005541455 dracut[1270]: *** Including module: dracut-systemd ***
Dec  1 13:25:27 np0005541455 dracut[1270]: *** Including module: usrmount ***
Dec  1 13:25:27 np0005541455 dracut[1270]: *** Including module: base ***
Dec  1 13:25:27 np0005541455 dracut[1270]: *** Including module: fs-lib ***
Dec  1 13:25:28 np0005541455 dracut[1270]: *** Including module: kdumpbase ***
Dec  1 13:25:28 np0005541455 dracut[1270]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  1 13:25:28 np0005541455 dracut[1270]:  microcode_ctl module: mangling fw_dir
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel" is ignored
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  1 13:25:28 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  1 13:25:29 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  1 13:25:29 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  1 13:25:29 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  1 13:25:29 np0005541455 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  1 13:25:29 np0005541455 dracut[1270]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  1 13:25:29 np0005541455 dracut[1270]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  1 13:25:29 np0005541455 dracut[1270]: *** Including module: openssl ***
Dec  1 13:25:29 np0005541455 dracut[1270]: *** Including module: shutdown ***
Dec  1 13:25:29 np0005541455 irqbalance[790]: Cannot change IRQ 35 affinity: Operation not permitted
Dec  1 13:25:29 np0005541455 irqbalance[790]: IRQ 35 affinity is now unmanaged
Dec  1 13:25:29 np0005541455 irqbalance[790]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  1 13:25:29 np0005541455 irqbalance[790]: IRQ 25 affinity is now unmanaged
Dec  1 13:25:29 np0005541455 irqbalance[790]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  1 13:25:29 np0005541455 irqbalance[790]: IRQ 28 affinity is now unmanaged
Dec  1 13:25:29 np0005541455 irqbalance[790]: Cannot change IRQ 34 affinity: Operation not permitted
Dec  1 13:25:29 np0005541455 irqbalance[790]: IRQ 34 affinity is now unmanaged
Dec  1 13:25:29 np0005541455 irqbalance[790]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  1 13:25:29 np0005541455 irqbalance[790]: IRQ 30 affinity is now unmanaged
Dec  1 13:25:29 np0005541455 irqbalance[790]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  1 13:25:29 np0005541455 irqbalance[790]: IRQ 29 affinity is now unmanaged
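irqbalance distributes interrupts by writing CPU masks to /proc/irq/<n>/smp_affinity. On this KVM guest the kernel rejects affinity changes for these (typically virtio MSI-X) vectors with EPERM, so irqbalance logs the failure once per IRQ and marks it unmanaged. A minimal Python sketch of the same write and failure mode, assuming root and reusing IRQ 35 from the log:

    # Sketch: what irqbalance attempts - writing a CPU mask to
    # /proc/irq/<n>/smp_affinity - and the EPERM it logs above.
    import errno

    def set_irq_affinity(irq, mask):
        try:
            with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
                f.write(f"{mask:x}")
            return True
        except OSError as e:
            if e.errno == errno.EPERM:
                print(f"Cannot change IRQ {irq} affinity: Operation not permitted")
                return False  # irqbalance then treats the IRQ as unmanaged
            raise

    set_irq_affinity(35, 0b01)  # try to pin IRQ 35 to CPU 0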
Dec  1 13:25:29 np0005541455 dracut[1270]: *** Including module: squash ***
Dec  1 13:25:29 np0005541455 dracut[1270]: *** Including modules done ***
Dec  1 13:25:29 np0005541455 dracut[1270]: *** Installing kernel module dependencies ***
Dec  1 13:25:29 np0005541455 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 13:25:30 np0005541455 dracut[1270]: *** Installing kernel module dependencies done ***
Dec  1 13:25:30 np0005541455 dracut[1270]: *** Resolving executable dependencies ***
Dec  1 13:25:32 np0005541455 dracut[1270]: *** Resolving executable dependencies done ***
Dec  1 13:25:32 np0005541455 dracut[1270]: *** Generating early-microcode cpio image ***
Dec  1 13:25:32 np0005541455 dracut[1270]: *** Store current command line parameters ***
Dec  1 13:25:32 np0005541455 dracut[1270]: Stored kernel commandline:
Dec  1 13:25:32 np0005541455 dracut[1270]: No dracut internal kernel commandline stored in the initramfs
Dec  1 13:25:32 np0005541455 dracut[1270]: *** Install squash loader ***
Dec  1 13:25:33 np0005541455 dracut[1270]: *** Squashing the files inside the initramfs ***
Dec  1 13:25:34 np0005541455 dracut[1270]: *** Squashing the files inside the initramfs done ***
Dec  1 13:25:34 np0005541455 dracut[1270]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Dec  1 13:25:34 np0005541455 dracut[1270]: *** Hardlinking files ***
Dec  1 13:25:34 np0005541455 dracut[1270]: *** Hardlinking files done ***
Dec  1 13:25:34 np0005541455 dracut[1270]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Dec  1 13:25:35 np0005541455 kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Dec  1 13:25:35 np0005541455 kdumpctl[1014]: kdump: Starting kdump: [OK]
Dec  1 13:25:35 np0005541455 systemd[1]: Finished Crash recovery kernel arming.
Dec  1 13:25:35 np0005541455 systemd[1]: Startup finished in 1.697s (kernel) + 2.818s (initrd) + 18.630s (userspace) = 23.146s.
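kdumpctl has now staged the crash kernel together with the initramfs image dracut just built; whether a crash kernel is loaded can be read back from sysfs, as in the sketch below. As an aside, the startup phases sum to 1.697 + 2.818 + 18.630 = 23.145 s while systemd reports 23.146 s, simply because the per-phase figures are rounded independently.

    # Sketch: confirm the kdump kernel is staged, matching "kexec: loaded
    # kdump kernel" above. /sys/kernel/kexec_crash_loaded reads 1 when a
    # crash kernel is loaded.
    with open("/sys/kernel/kexec_crash_loaded") as f:
        print("kdump kernel loaded:", f.read().strip() == "1")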
Dec  1 13:25:40 np0005541455 systemd[1]: Created slice User Slice of UID 1000.
Dec  1 13:25:40 np0005541455 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  1 13:25:40 np0005541455 systemd-logind[797]: New session 1 of user zuul.
Dec  1 13:25:40 np0005541455 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  1 13:25:40 np0005541455 systemd[1]: Starting User Manager for UID 1000...
Dec  1 13:25:40 np0005541455 systemd[4298]: Queued start job for default target Main User Target.
Dec  1 13:25:40 np0005541455 systemd[4298]: Created slice User Application Slice.
Dec  1 13:25:40 np0005541455 systemd[4298]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  1 13:25:40 np0005541455 systemd[4298]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 13:25:40 np0005541455 systemd[4298]: Reached target Paths.
Dec  1 13:25:40 np0005541455 systemd[4298]: Reached target Timers.
Dec  1 13:25:40 np0005541455 systemd[4298]: Starting D-Bus User Message Bus Socket...
Dec  1 13:25:40 np0005541455 systemd[4298]: Starting Create User's Volatile Files and Directories...
Dec  1 13:25:40 np0005541455 systemd[4298]: Listening on D-Bus User Message Bus Socket.
Dec  1 13:25:40 np0005541455 systemd[4298]: Reached target Sockets.
Dec  1 13:25:40 np0005541455 systemd[4298]: Finished Create User's Volatile Files and Directories.
Dec  1 13:25:40 np0005541455 systemd[4298]: Reached target Basic System.
Dec  1 13:25:40 np0005541455 systemd[4298]: Reached target Main User Target.
Dec  1 13:25:40 np0005541455 systemd[4298]: Startup finished in 179ms.
Dec  1 13:25:40 np0005541455 systemd[1]: Started User Manager for UID 1000.
Dec  1 13:25:40 np0005541455 systemd[1]: Started Session 1 of User zuul.
Dec  1 13:25:40 np0005541455 python3[4380]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 13:25:43 np0005541455 python3[4408]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 13:25:49 np0005541455 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 13:25:50 np0005541455 python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 13:25:51 np0005541455 python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  1 13:25:53 np0005541455 python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJbEVVLmAXSsjzgbn/xgvaJigfAM6xWr7qeYT2D/WunmSupwor+vio5MvoyHa6aU1YfiLGH5Y/NrE0lrLKMyEsj+XVqUUo3rlRABvaTkTPBnMjhxJWBKIQi7zZ91+dl+zxYgwro6VNPBOMdf6CxL/DWBoCZNqbY716S2lAJCdFuH+wS3BwSkL5QwT3Yol7ZdRfh6yO55YBJck4qXnpwu9mPv1oOXFOOZctIXY76qGM3PS64++46OfiRXujT8eG5+7kdeVsvAZTdC2KmzQSG8YnAloodME3wROhA6H0lERUzEHo5Hbd+1M81KNnJpiOAyYF/mLVDxoFeQaGEsAsLasAKgTvDG6ywBgZUyK25a1S2W98AZa/YEuhoyR2x8B6evnBnnWfgsU04ltKP58zuLl0Q9PUMMGO0FDW2Lj/W2m1Or02fXchTshuvgj68E/CROiNANMOgV3NsIcks5HgS1YtaLhVNnmQLUlBFdudhxdTzJE++FUgqzmtkZXetmjT2BU= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:25:54 np0005541455 python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:25:54 np0005541455 python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:25:55 np0005541455 python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764613554.5929525-207-246217435120000/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=3459809ac0b04290960bc5b03200121a_id_rsa follow=False checksum=8ed7319caa78bb5b7301949578e0b81adc32f1d9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:25:55 np0005541455 python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:25:56 np0005541455 python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764613555.5928545-240-39185583814301/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=3459809ac0b04290960bc5b03200121a_id_rsa.pub follow=False checksum=b0951c04b18cb454f8d7f3055716982b15049309 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
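The ansible-file and ansible-copy entries log mode as a plain decimal integer rather than the octal string a playbook would use: 448 above is 0o700 for ~/.ssh, 384 is 0o600 for id_rsa, and 420 is 0o644 for id_rsa.pub; later entries use 493 (0o755), 511 (0o777), and 288 (0o440). A one-liner to map the values seen in this log:

    # Sketch: decimal mode values from these entries rendered as octal.
    for dec in (448, 384, 420, 493, 511, 288):
        print(dec, "=", oct(dec))
    # 448 = 0o700 (~/.ssh), 384 = 0o600 (id_rsa), 420 = 0o644 (id_rsa.pub),
    # 493 = 0o755 (zuul-output dirs), 511 = 0o777 (/etc/nodepool),
    # 288 = 0o440 (sudoers drop-in)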
Dec  1 13:25:57 np0005541455 python3[4972]: ansible-ping Invoked with data=pong
Dec  1 13:25:58 np0005541455 python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 13:26:00 np0005541455 python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  1 13:26:01 np0005541455 python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:01 np0005541455 python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:02 np0005541455 python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:02 np0005541455 python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:02 np0005541455 python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:03 np0005541455 python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:04 np0005541455 python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:05 np0005541455 python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:26:06 np0005541455 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764613564.9796543-21-54685427025275/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:06 np0005541455 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:06 np0005541455 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:07 np0005541455 python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:07 np0005541455 python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:07 np0005541455 python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:08 np0005541455 python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:08 np0005541455 python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:08 np0005541455 python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:09 np0005541455 python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:09 np0005541455 python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:09 np0005541455 python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:09 np0005541455 python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:10 np0005541455 python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:10 np0005541455 python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:10 np0005541455 python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:11 np0005541455 python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:11 np0005541455 python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:11 np0005541455 python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:12 np0005541455 python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:12 np0005541455 python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:12 np0005541455 python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:12 np0005541455 python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:13 np0005541455 python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:13 np0005541455 python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:13 np0005541455 python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:26:14 np0005541455 python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
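Each ansible-authorized_key entry above is one loop iteration installing a maintainer's public key for the zuul user. The module is idempotent: it appends a key to ~/.ssh/authorized_keys only when it is not already present, which is why re-runs report "ok" instead of "changed". A minimal Python sketch of that behavior (not the module's real implementation; the key shown is hypothetical):

    # Sketch (not the real authorized_key module): append a public key to
    # authorized_keys only when it is not already present.
    import os

    def add_authorized_key(key, path="/home/zuul/.ssh/authorized_keys"):
        existing = []
        if os.path.exists(path):
            with open(path) as f:
                existing = [line.strip() for line in f]
        if key.strip() in existing:
            return False  # unchanged -> Ansible reports "ok"
        with open(path, "a") as f:
            f.write(key.strip() + "\n")
        os.chmod(path, 0o600)
        return True  # changed

    add_authorized_key("ssh-ed25519 AAAA... example@host")  # hypothetical key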
Dec  1 13:26:16 np0005541455 python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  1 13:26:16 np0005541455 systemd[1]: Starting Time & Date Service...
Dec  1 13:26:16 np0005541455 systemd[1]: Started Time & Date Service.
Dec  1 13:26:16 np0005541455 systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Dec  1 13:26:16 np0005541455 python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:17 np0005541455 python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:26:17 np0005541455 python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764613576.9922285-153-201948458772422/source _original_basename=tmpok5u_32d follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:18 np0005541455 python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:26:18 np0005541455 python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764613577.9167564-183-11286798252639/source _original_basename=tmpf4fx5cea follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:19 np0005541455 python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:26:19 np0005541455 python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764613579.059656-231-202730099669458/source _original_basename=tmp_lk5flwu follow=False checksum=8c2ca5bc92adf57e5f110fdd685e6d08e9897451 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:20 np0005541455 python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:26:20 np0005541455 python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:26:21 np0005541455 python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:26:21 np0005541455 python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764613580.8867648-273-11710935981914/source _original_basename=tmp96dg4wiz follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:22 np0005541455 python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-9eb9-c57b-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:26:22 np0005541455 python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-9eb9-c57b-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
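The #012 sequences in these command entries are rsyslog's control-character escaping (octal 012 is a newline), so the _raw_params logged above is simply `env` followed by a trailing newline. A small decoder:

    # Sketch: rsyslog escapes control characters as #<octal>, so "#012"
    # in these entries is an embedded newline (octal 012 = \n).
    import re

    def unescape_syslog(s):
        return re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), s)

    print(repr(unescape_syslog("env#012")))  # 'env\n'
    print(repr(unescape_syslog("lsblk -nd -o MAJ:MIN /dev/vda#012")))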
Dec  1 13:26:24 np0005541455 python3[6915]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:30 np0005541455 chronyd[792]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Dec  1 13:26:41 np0005541455 python3[6943]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:26:46 np0005541455 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 13:27:22 np0005541455 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  1 13:27:22 np0005541455 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  1 13:27:22 np0005541455 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  1 13:27:22 np0005541455 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  1 13:27:22 np0005541455 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  1 13:27:22 np0005541455 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  1 13:27:22 np0005541455 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  1 13:27:22 np0005541455 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  1 13:27:22 np0005541455 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  1 13:27:22 np0005541455 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3187] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 13:27:22 np0005541455 systemd-udevd[6947]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3417] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3441] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3445] device (eth1): carrier: link connected
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3446] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3451] policy: auto-activating connection 'Wired connection 1' (fbdaa184-f8a1-3bfc-a799-1b0024f7214e)
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3455] device (eth1): Activation: starting connection 'Wired connection 1' (fbdaa184-f8a1-3bfc-a799-1b0024f7214e)
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3456] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3458] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3461] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 13:27:22 np0005541455 NetworkManager[856]: <info>  [1764613642.3465] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 13:27:23 np0005541455 python3[6973]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-031c-09f0-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:27:30 np0005541455 python3[7053]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:27:30 np0005541455 python3[7126]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764613649.9522243-102-239292942413620/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=b12e3d54b08804f70fa9c0c3df9513fcfe9a8530 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
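The file just installed, ci-private-network.nmconnection, is a NetworkManager keyfile profile; its contents are not logged (content=NOT_LOGGING_PARAMETER). For orientation only, a keyfile for a statically addressed private CI network typically looks like the sketch below; the interface name and address are hypothetical:

    # Hypothetical sketch of a keyfile like ci-private-network.nmconnection;
    # the real contents are not logged above.
    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1

    [ipv4]
    method=manual
    address1=192.0.2.10/24

    [ipv6]
    method=ignore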
Dec  1 13:27:31 np0005541455 python3[7176]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 13:27:31 np0005541455 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  1 13:27:31 np0005541455 systemd[1]: Stopped Network Manager Wait Online.
Dec  1 13:27:31 np0005541455 systemd[1]: Stopping Network Manager Wait Online...
Dec  1 13:27:31 np0005541455 systemd[1]: Stopping Network Manager...
Dec  1 13:27:31 np0005541455 NetworkManager[856]: <info>  [1764613651.5087] caught SIGTERM, shutting down normally.
Dec  1 13:27:31 np0005541455 NetworkManager[856]: <info>  [1764613651.5106] dhcp4 (eth0): canceled DHCP transaction
Dec  1 13:27:31 np0005541455 NetworkManager[856]: <info>  [1764613651.5107] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 13:27:31 np0005541455 NetworkManager[856]: <info>  [1764613651.5107] dhcp4 (eth0): state changed no lease
Dec  1 13:27:31 np0005541455 NetworkManager[856]: <info>  [1764613651.5110] manager: NetworkManager state is now CONNECTING
Dec  1 13:27:31 np0005541455 NetworkManager[856]: <info>  [1764613651.5216] dhcp4 (eth1): canceled DHCP transaction
Dec  1 13:27:31 np0005541455 NetworkManager[856]: <info>  [1764613651.5216] dhcp4 (eth1): state changed no lease
Dec  1 13:27:31 np0005541455 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 13:27:31 np0005541455 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 13:27:31 np0005541455 NetworkManager[856]: <info>  [1764613651.5823] exiting (success)
Dec  1 13:27:31 np0005541455 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  1 13:27:31 np0005541455 systemd[1]: Stopped Network Manager.
Dec  1 13:27:31 np0005541455 systemd[1]: NetworkManager.service: Consumed 1.060s CPU time, 9.9M memory peak.
Dec  1 13:27:31 np0005541455 systemd[1]: Starting Network Manager...
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.6556] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c12f9c43-c499-4c8a-a9df-8527ffbb5e7f)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.6560] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.6632] manager[0x56317baa3070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 13:27:31 np0005541455 systemd[1]: Starting Hostname Service...
Dec  1 13:27:31 np0005541455 systemd[1]: Started Hostname Service.
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7796] hostname: hostname: using hostnamed
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7797] hostname: static hostname changed from (none) to "np0005541455.novalocal"
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7803] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7810] manager[0x56317baa3070]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7810] manager[0x56317baa3070]: rfkill: WWAN hardware radio set enabled
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7844] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7844] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7845] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7846] manager: Networking is enabled by state file
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7849] settings: Loaded settings plugin: keyfile (internal)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7854] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7887] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7898] dhcp: init: Using DHCP client 'internal'
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7901] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7908] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7915] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7924] device (lo): Activation: starting connection 'lo' (05520b0a-6bbf-47af-9e84-ea1a46a10382)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7934] device (eth0): carrier: link connected
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7939] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7947] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7947] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7956] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7963] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7971] device (eth1): carrier: link connected
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7976] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7982] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (fbdaa184-f8a1-3bfc-a799-1b0024f7214e) (indicated)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7982] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7989] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.7997] device (eth1): Activation: starting connection 'Wired connection 1' (fbdaa184-f8a1-3bfc-a799-1b0024f7214e)
Dec  1 13:27:31 np0005541455 systemd[1]: Started Network Manager.
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8004] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8010] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8013] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8016] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8018] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8022] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8026] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8029] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8034] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8042] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8048] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8057] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8060] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8079] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8085] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8093] device (lo): Activation: successful, device activated.
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8110] dhcp4 (eth0): state changed new lease, address=38.102.83.97
Dec  1 13:27:31 np0005541455 NetworkManager[7193]: <info>  [1764613651.8118] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 13:27:31 np0005541455 systemd[1]: Starting Network Manager Wait Online...
Dec  1 13:27:32 np0005541455 NetworkManager[7193]: <info>  [1764613652.0503] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 13:27:32 np0005541455 NetworkManager[7193]: <info>  [1764613652.0536] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 13:27:32 np0005541455 NetworkManager[7193]: <info>  [1764613652.0539] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 13:27:32 np0005541455 NetworkManager[7193]: <info>  [1764613652.0545] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 13:27:32 np0005541455 NetworkManager[7193]: <info>  [1764613652.0552] device (eth0): Activation: successful, device activated.
Dec  1 13:27:32 np0005541455 NetworkManager[7193]: <info>  [1764613652.0563] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 13:27:32 np0005541455 python3[7241]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-031c-09f0-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:27:42 np0005541455 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 13:28:01 np0005541455 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 13:28:03 np0005541455 systemd[4298]: Starting Mark boot as successful...
Dec  1 13:28:03 np0005541455 systemd[4298]: Finished Mark boot as successful.
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2273] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 13:28:17 np0005541455 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 13:28:17 np0005541455 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2662] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2668] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2690] device (eth1): Activation: successful, device activated.
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2702] manager: startup complete
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2708] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <warn>  [1764613697.2743] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2754] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  1 13:28:17 np0005541455 systemd[1]: Finished Network Manager Wait Online.
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2856] dhcp4 (eth1): canceled DHCP transaction
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2856] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2857] dhcp4 (eth1): state changed no lease
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2877] policy: auto-activating connection 'ci-private-network' (c8f215e6-5e9a-5e2d-a810-1cab7f3f4862)
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2883] device (eth1): Activation: starting connection 'ci-private-network' (c8f215e6-5e9a-5e2d-a810-1cab7f3f4862)
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2886] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2889] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2899] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2911] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2965] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2968] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 13:28:17 np0005541455 NetworkManager[7193]: <info>  [1764613697.2977] device (eth1): Activation: successful, device activated.
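To summarize the sequence above: the default DHCP profile 'Wired connection 1' never obtained a lease on eth1, so once startup completed NetworkManager failed that activation with ip-config-unavailable and auto-activated the freshly installed ci-private-network profile, which brings eth1 up without DHCP. A quick Python check of the resulting state (shells out to nmcli, which must be installed):

    # Sketch: verify the outcome - eth1 should now be activated with the
    # 'ci-private-network' profile rather than 'Wired connection 1'.
    import subprocess

    out = subprocess.run(
        ["nmcli", "-t", "-f", "DEVICE,STATE,CONNECTION", "device"],
        capture_output=True, text=True, check=True).stdout
    for line in out.strip().splitlines():
        dev, state, conn = line.split(":", 2)
        if dev == "eth1":
            print(dev, state, conn)  # expect: eth1 connected ci-private-network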
Dec  1 13:28:27 np0005541455 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 13:28:32 np0005541455 systemd-logind[797]: Session 1 logged out. Waiting for processes to exit.
Dec  1 13:28:50 np0005541455 systemd-logind[797]: New session 3 of user zuul.
Dec  1 13:28:50 np0005541455 systemd[1]: Started Session 3 of User zuul.
Dec  1 13:28:51 np0005541455 python3[7374]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:28:51 np0005541455 python3[7447]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764613730.6908126-259-210655231279725/source _original_basename=tmpik9le9vr follow=False checksum=92693a024887f0d8a73db760a96b6ced9ef40e01 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:28:53 np0005541455 systemd[1]: session-3.scope: Deactivated successfully.
Dec  1 13:28:53 np0005541455 systemd-logind[797]: Session 3 logged out. Waiting for processes to exit.
Dec  1 13:28:53 np0005541455 systemd-logind[797]: Removed session 3.
Dec  1 13:30:47 np0005541455 chronyd[792]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Dec  1 13:31:03 np0005541455 systemd[4298]: Created slice User Background Tasks Slice.
Dec  1 13:31:03 np0005541455 systemd[4298]: Starting Cleanup of User's Temporary Files and Directories...
Dec  1 13:31:03 np0005541455 systemd[4298]: Finished Cleanup of User's Temporary Files and Directories.
Dec  1 13:36:01 np0005541455 systemd-logind[797]: New session 4 of user zuul.
Dec  1 13:36:01 np0005541455 systemd[1]: Started Session 4 of User zuul.
Dec  1 13:36:01 np0005541455 python3[7526]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-1765-01f7-000000001cf4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:36:02 np0005541455 python3[7555]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:36:02 np0005541455 python3[7581]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:36:02 np0005541455 python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:36:02 np0005541455 python3[7633]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:36:03 np0005541455 python3[7659]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:36:04 np0005541455 python3[7737]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:36:04 np0005541455 python3[7810]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764614163.9356449-495-101273861118218/source _original_basename=tmp0n19e9a_ follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:36:05 np0005541455 python3[7860]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 13:36:05 np0005541455 systemd[1]: Reloading.
Dec  1 13:36:05 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 13:36:07 np0005541455 python3[7918]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  1 13:36:07 np0005541455 python3[7944]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:36:08 np0005541455 python3[7972]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:36:08 np0005541455 python3[8000]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:36:08 np0005541455 python3[8028]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:36:09 np0005541455 python3[8055]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-1765-01f7-000000001cfb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
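Note: this block applies cgroup v2 I/O throttling (io.max) to the four top-level cgroups and then reads the files back to verify; #012 is the syslog escape for an embedded newline. A compact shell equivalent, with the device number (252:0) and the limits copied verbatim from the logged commands:

  limits='252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000'
  for grp in init.scope machine.slice system.slice user.slice; do
    echo "$limits" > "/sys/fs/cgroup/$grp/io.max"   # cap read/write IOPS and bandwidth
    echo "$grp"
    cat "/sys/fs/cgroup/$grp/io.max"                # show what the kernel accepted
  done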
Dec  1 13:36:09 np0005541455 python3[8085]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 13:36:11 np0005541455 systemd[1]: session-4.scope: Deactivated successfully.
Dec  1 13:36:11 np0005541455 systemd-logind[797]: Session 4 logged out. Waiting for processes to exit.
Dec  1 13:36:11 np0005541455 systemd[1]: session-4.scope: Consumed 4.306s CPU time.
Dec  1 13:36:11 np0005541455 systemd-logind[797]: Removed session 4.
Dec  1 13:36:13 np0005541455 systemd-logind[797]: New session 5 of user zuul.
Dec  1 13:36:13 np0005541455 systemd[1]: Started Session 5 of User zuul.
Dec  1 13:36:13 np0005541455 python3[8121]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  1 13:36:19 np0005541455 irqbalance[790]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  1 13:36:19 np0005541455 irqbalance[790]: IRQ 27 affinity is now unmanaged
Dec  1 13:36:28 np0005541455 kernel: SELinux:  Converting 383 SID table entries...
Dec  1 13:36:28 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 13:36:28 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 13:36:28 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 13:36:28 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 13:36:28 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 13:36:28 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 13:36:28 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 13:36:38 np0005541455 kernel: SELinux:  Converting 383 SID table entries...
Dec  1 13:36:38 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 13:36:38 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 13:36:38 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 13:36:38 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 13:36:38 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 13:36:38 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 13:36:38 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 13:36:48 np0005541455 kernel: SELinux:  Converting 383 SID table entries...
Dec  1 13:36:48 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 13:36:48 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 13:36:48 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 13:36:48 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 13:36:48 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 13:36:48 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 13:36:48 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 13:36:49 np0005541455 setsebool[8190]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  1 13:36:49 np0005541455 setsebool[8190]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
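Note: these two boolean flips land mid-way through the dnf transaction started at 13:36:13 and are typical of SELinux policy package scriptlets; the log does not record which scriptlet issued them. The command-line equivalent would be roughly the following (the -P persistence flag is an assumption — the log only shows that the runtime value changed):

  setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1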
Dec  1 13:37:03 np0005541455 kernel: SELinux:  Converting 386 SID table entries...
Dec  1 13:37:03 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 13:37:03 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 13:37:03 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 13:37:03 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 13:37:03 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 13:37:03 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 13:37:03 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 13:37:21 np0005541455 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  1 13:37:21 np0005541455 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 13:37:21 np0005541455 systemd[1]: Starting man-db-cache-update.service...
Dec  1 13:37:21 np0005541455 systemd[1]: Reloading.
Dec  1 13:37:21 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 13:37:21 np0005541455 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 13:37:23 np0005541455 python3[10171]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-3b8c-0066-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:37:24 np0005541455 kernel: evm: overlay not supported
Dec  1 13:37:24 np0005541455 systemd[4298]: Starting D-Bus User Message Bus...
Dec  1 13:37:24 np0005541455 dbus-broker-launch[11217]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  1 13:37:24 np0005541455 dbus-broker-launch[11217]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  1 13:37:24 np0005541455 systemd[4298]: Started D-Bus User Message Bus.
Dec  1 13:37:24 np0005541455 dbus-broker-launch[11217]: Ready
Dec  1 13:37:24 np0005541455 systemd[4298]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  1 13:37:24 np0005541455 systemd[4298]: Created slice Slice /user.
Dec  1 13:37:24 np0005541455 systemd[4298]: podman-11088.scope: unit configures an IP firewall, but not running as root.
Dec  1 13:37:24 np0005541455 systemd[4298]: (This warning is only shown for the first unit using IP firewalling.)
Dec  1 13:37:24 np0005541455 systemd[4298]: Started podman-11088.scope.
Dec  1 13:37:24 np0005541455 systemd[4298]: Started podman-pause-5bff1b3d.scope.
Dec  1 13:37:25 np0005541455 python3[11948]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.217:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.217:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:37:25 np0005541455 python3[11948]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
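Note: the blockinfile task above appends a TOML stanza marking 38.102.83.217:5001 as an insecure (plain-HTTP) registry; the marker lines and body are reproduced verbatim from the logged parameters. A rough shell equivalent, minus blockinfile's idempotent marker handling:

  printf '%s\n' \
    '# BEGIN ANSIBLE MANAGED BLOCK' \
    '[[registry]]' \
    'location = "38.102.83.217:5001"' \
    'insecure = true' \
    '# END ANSIBLE MANAGED BLOCK' \
    >> /etc/containers/registries.conf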
Dec  1 13:37:25 np0005541455 systemd[1]: session-5.scope: Deactivated successfully.
Dec  1 13:37:25 np0005541455 systemd[1]: session-5.scope: Consumed 1min 4.294s CPU time.
Dec  1 13:37:25 np0005541455 systemd-logind[797]: Session 5 logged out. Waiting for processes to exit.
Dec  1 13:37:25 np0005541455 systemd-logind[797]: Removed session 5.
Dec  1 13:37:50 np0005541455 systemd-logind[797]: New session 6 of user zuul.
Dec  1 13:37:50 np0005541455 systemd[1]: Started Session 6 of User zuul.
Dec  1 13:37:50 np0005541455 python3[21751]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpExICxWH08DykPw9F24+39i0gxZEG8Dl+cHitiIjw6N5BY1GHMRC00GCAGxdbZg6IEUKgJNCrPW/227qVokoA= zuul@np0005541454.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:37:51 np0005541455 python3[21906]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpExICxWH08DykPw9F24+39i0gxZEG8Dl+cHitiIjw6N5BY1GHMRC00GCAGxdbZg6IEUKgJNCrPW/227qVokoA= zuul@np0005541454.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  1 13:37:51 np0005541455 python3[22209]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541455.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  1 13:37:52 np0005541455 python3[22450]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpExICxWH08DykPw9F24+39i0gxZEG8Dl+cHitiIjw6N5BY1GHMRC00GCAGxdbZg6IEUKgJNCrPW/227qVokoA= zuul@np0005541454.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
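Note: the same ECDSA public key (generated on np0005541454, the Ansible controller) is installed for zuul, root, and the newly created cloud-admin user. Roughly what ansible.posix.authorized_key does for each account, without its locking and key validation:

  key='ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMpExICxWH08DykPw9F24+39i0gxZEG8Dl+cHitiIjw6N5BY1GHMRC00GCAGxdbZg6IEUKgJNCrPW/227qVokoA= zuul@np0005541454.novalocal'
  for u in zuul root cloud-admin; do
    home=$(getent passwd "$u" | cut -d: -f6)
    install -d -m 0700 -o "$u" -g "$u" "$home/.ssh"
    grep -qxF "$key" "$home/.ssh/authorized_keys" 2>/dev/null ||
      echo "$key" >> "$home/.ssh/authorized_keys"
    chown "$u:$u" "$home/.ssh/authorized_keys"
    chmod 0600 "$home/.ssh/authorized_keys"
  done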
Dec  1 13:37:53 np0005541455 python3[22627]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:37:53 np0005541455 python3[22869]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764614272.8165905-135-48746288852001/source _original_basename=tmppvg0g249 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:37:54 np0005541455 python3[23181]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  1 13:37:54 np0005541455 systemd[1]: Starting Hostname Service...
Dec  1 13:37:54 np0005541455 systemd[1]: Started Hostname Service.
Dec  1 13:37:54 np0005541455 systemd-hostnamed[23277]: Changed pretty hostname to 'compute-0'
Dec  1 13:37:54 np0005541455 systemd-hostnamed[23277]: Hostname set to <compute-0> (static)
Dec  1 13:37:54 np0005541455 NetworkManager[7193]: <info>  [1764614274.5679] hostname: static hostname changed from "np0005541455.novalocal" to "compute-0"
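Note: ansible.builtin.hostname with use=systemd talks to systemd-hostnamed over D-Bus, which is why hostnamed starts on demand here and why both the pretty and static names change in one step. The command-line equivalent:

  hostnamectl set-hostname compute-0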
Dec  1 13:37:54 np0005541455 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 13:37:54 np0005541455 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 13:37:54 np0005541455 systemd[1]: session-6.scope: Deactivated successfully.
Dec  1 13:37:54 np0005541455 systemd[1]: session-6.scope: Consumed 2.397s CPU time.
Dec  1 13:37:54 np0005541455 systemd-logind[797]: Session 6 logged out. Waiting for processes to exit.
Dec  1 13:37:54 np0005541455 systemd-logind[797]: Removed session 6.
Dec  1 13:38:04 np0005541455 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 13:38:15 np0005541455 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 13:38:15 np0005541455 systemd[1]: Finished man-db-cache-update.service.
Dec  1 13:38:15 np0005541455 systemd[1]: man-db-cache-update.service: Consumed 1min 5.724s CPU time.
Dec  1 13:38:15 np0005541455 systemd[1]: run-r9bebe57140b24a6da4c574a713248257.service: Deactivated successfully.
Dec  1 13:38:24 np0005541455 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 13:40:23 np0005541455 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  1 13:40:23 np0005541455 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  1 13:40:23 np0005541455 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  1 13:40:23 np0005541455 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  1 13:44:13 np0005541455 systemd-logind[797]: New session 7 of user zuul.
Dec  1 13:44:13 np0005541455 systemd[1]: Started Session 7 of User zuul.
Dec  1 13:44:13 np0005541455 python3[30098]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 13:44:15 np0005541455 python3[30214]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:44:16 np0005541455 python3[30287]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764614655.2969284-33605-104145498495916/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:44:16 np0005541455 python3[30313]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:44:16 np0005541455 python3[30386]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764614655.2969284-33605-104145498495916/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:44:17 np0005541455 python3[30412]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:44:17 np0005541455 python3[30485]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764614655.2969284-33605-104145498495916/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:44:17 np0005541455 python3[30511]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:44:18 np0005541455 python3[30584]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764614655.2969284-33605-104145498495916/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:44:18 np0005541455 python3[30610]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:44:18 np0005541455 python3[30683]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764614655.2969284-33605-104145498495916/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:44:19 np0005541455 python3[30709]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:44:19 np0005541455 python3[30782]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764614655.2969284-33605-104145498495916/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:44:19 np0005541455 python3[30808]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  1 13:44:20 np0005541455 python3[30881]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764614655.2969284-33605-104145498495916/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 13:47:06 np0005541455 python3[30948]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 13:52:06 np0005541455 systemd[1]: session-7.scope: Deactivated successfully.
Dec  1 13:52:06 np0005541455 systemd[1]: session-7.scope: Consumed 5.576s CPU time.
Dec  1 13:52:06 np0005541455 systemd-logind[797]: Session 7 logged out. Waiting for processes to exit.
Dec  1 13:52:06 np0005541455 systemd-logind[797]: Removed session 7.
Dec  1 13:58:53 np0005541455 systemd[1]: Starting dnf makecache...
Dec  1 13:58:53 np0005541455 dnf[30971]: Failed determining last makecache time.
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-openstack-barbican-42b4c41831408a8e323 264 kB/s |  13 kB     00:00
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 2.6 MB/s |  65 kB     00:00
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.4 MB/s |  32 kB     00:00
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-python-stevedore-c4acc5639fd2329372142 5.4 MB/s | 131 kB     00:00
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.4 MB/s |  32 kB     00:00
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-os-net-config-d0cedbdb788d43e5c7551df5  11 MB/s | 349 kB     00:00
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.6 MB/s |  42 kB     00:00
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-python-designate-tests-tempest-347fdbc 791 kB/s |  18 kB     00:00
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-openstack-glance-1fd12c29b339f30fe823e 716 kB/s |  18 kB     00:00
Dec  1 13:58:53 np0005541455 dnf[30971]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.0 MB/s |  29 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: delorean-openstack-manila-3c01b7181572c95dac462 1.0 MB/s |  25 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: delorean-python-whitebox-neutron-tests-tempest- 5.7 MB/s | 154 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: delorean-openstack-octavia-ba397f07a7331190208c 932 kB/s |  26 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: delorean-openstack-watcher-c014f81a8647287f6dcc 745 kB/s |  16 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: delorean-ansible-config_template-5ccaa22121a7ff 334 kB/s | 7.4 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 5.7 MB/s | 144 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: delorean-openstack-swift-dc98a8463506ac520c469a 564 kB/s |  14 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: delorean-python-tempestconf-8515371b7cceebd4282 2.3 MB/s |  53 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.6 MB/s |  96 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: CentOS Stream 9 - BaseOS                         31 kB/s | 7.3 kB     00:00
Dec  1 13:58:54 np0005541455 dnf[30971]: CentOS Stream 9 - AppStream                      82 kB/s | 7.4 kB     00:00
Dec  1 13:58:55 np0005541455 dnf[30971]: CentOS Stream 9 - CRB                            27 kB/s | 7.2 kB     00:00
Dec  1 13:58:55 np0005541455 dnf[30971]: CentOS Stream 9 - Extras packages                81 kB/s | 8.3 kB     00:00
Dec  1 13:58:55 np0005541455 dnf[30971]: dlrn-antelope-testing                            27 MB/s | 1.1 MB     00:00
Dec  1 13:58:55 np0005541455 dnf[30971]: dlrn-antelope-build-deps                         17 MB/s | 461 kB     00:00
Dec  1 13:58:56 np0005541455 dnf[30971]: centos9-rabbitmq                                1.1 MB/s | 123 kB     00:00
Dec  1 13:58:56 np0005541455 dnf[30971]: centos9-storage                                  23 MB/s | 415 kB     00:00
Dec  1 13:58:56 np0005541455 dnf[30971]: centos9-opstools                                4.0 MB/s |  51 kB     00:00
Dec  1 13:58:56 np0005541455 dnf[30971]: NFV SIG OpenvSwitch                              27 MB/s | 456 kB     00:00
Dec  1 13:58:56 np0005541455 dnf[30971]: repo-setup-centos-appstream                      84 MB/s |  25 MB     00:00
Dec  1 13:59:03 np0005541455 dnf[30971]: repo-setup-centos-baseos                         20 MB/s | 8.8 MB     00:00
Dec  1 13:59:04 np0005541455 dnf[30971]: repo-setup-centos-highavailability               29 MB/s | 744 kB     00:00
Dec  1 13:59:04 np0005541455 dnf[30971]: repo-setup-centos-powertools                     65 MB/s | 7.3 MB     00:00
Dec  1 13:59:07 np0005541455 dnf[30971]: Extra Packages for Enterprise Linux 9 - x86_64   15 MB/s |  20 MB     00:01
Dec  1 13:59:20 np0005541455 dnf[30971]: Metadata cache created.
Dec  1 13:59:20 np0005541455 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  1 13:59:20 np0005541455 systemd[1]: Finished dnf makecache.
Dec  1 13:59:20 np0005541455 systemd[1]: dnf-makecache.service: Consumed 23.884s CPU time.
Dec  1 13:59:45 np0005541455 systemd-logind[797]: New session 8 of user zuul.
Dec  1 13:59:45 np0005541455 systemd[1]: Started Session 8 of User zuul.
Dec  1 13:59:46 np0005541455 python3.9[31229]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 13:59:47 np0005541455 python3.9[31410]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
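Note: #012 is the syslog escape for a newline, so the _raw_params above is a multi-line script. Decoded verbatim, it fetches the repo-setup tool, installs it into a throwaway venv, and points the host at the current podified antelope repos:

  set -euxo pipefail
  pushd /var/tmp
  curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
  pushd repo-setup-main
  python3 -m venv ./venv
  PBR_VERSION=0.0.0 ./venv/bin/pip install ./
  ./venv/bin/repo-setup current-podified -b antelope
  popd
  rm -rf repo-setup-main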
Dec  1 13:59:55 np0005541455 systemd[1]: session-8.scope: Deactivated successfully.
Dec  1 13:59:55 np0005541455 systemd[1]: session-8.scope: Consumed 7.695s CPU time.
Dec  1 13:59:55 np0005541455 systemd-logind[797]: Session 8 logged out. Waiting for processes to exit.
Dec  1 13:59:55 np0005541455 systemd-logind[797]: Removed session 8.
Dec  1 14:00:02 np0005541455 systemd-logind[797]: New session 9 of user zuul.
Dec  1 14:00:02 np0005541455 systemd[1]: Started Session 9 of User zuul.
Dec  1 14:00:03 np0005541455 python3.9[31620]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:00:03 np0005541455 systemd[1]: session-9.scope: Deactivated successfully.
Dec  1 14:00:03 np0005541455 systemd-logind[797]: Session 9 logged out. Waiting for processes to exit.
Dec  1 14:00:03 np0005541455 systemd-logind[797]: Removed session 9.
Dec  1 14:00:20 np0005541455 systemd-logind[797]: New session 10 of user zuul.
Dec  1 14:00:20 np0005541455 systemd[1]: Started Session 10 of User zuul.
Dec  1 14:00:20 np0005541455 python3.9[31802]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  1 14:00:22 np0005541455 python3.9[31976]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:00:23 np0005541455 python3.9[32128]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:00:24 np0005541455 python3.9[32281]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:00:25 np0005541455 python3.9[32433]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:00:25 np0005541455 python3.9[32585]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:00:26 np0005541455 python3.9[32708]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764615625.341461-73-103105288359067/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:00:27 np0005541455 python3.9[32860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:00:28 np0005541455 python3.9[33016]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:00:29 np0005541455 python3.9[33168]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:00:30 np0005541455 python3.9[33318]: ansible-ansible.builtin.service_facts Invoked
Dec  1 14:00:35 np0005541455 python3.9[33571]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:00:36 np0005541455 python3.9[33721]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:00:37 np0005541455 python3.9[33875]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:00:39 np0005541455 python3.9[34033]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:00:39 np0005541455 python3.9[34117]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:01:41 np0005541455 systemd[1]: Reloading.
Dec  1 14:01:41 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:01:42 np0005541455 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  1 14:01:42 np0005541455 systemd[1]: Reloading.
Dec  1 14:01:42 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:01:42 np0005541455 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  1 14:01:42 np0005541455 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  1 14:01:42 np0005541455 systemd[1]: Reloading.
Dec  1 14:01:42 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:01:42 np0005541455 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  1 14:01:43 np0005541455 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Dec  1 14:01:43 np0005541455 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Dec  1 14:01:43 np0005541455 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Dec  1 14:02:47 np0005541455 kernel: SELinux:  Converting 2717 SID table entries...
Dec  1 14:02:47 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 14:02:47 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 14:02:47 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 14:02:47 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 14:02:47 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 14:02:47 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 14:02:47 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 14:02:47 np0005541455 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  1 14:02:47 np0005541455 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 14:02:47 np0005541455 systemd[1]: Starting man-db-cache-update.service...
Dec  1 14:02:47 np0005541455 systemd[1]: Reloading.
Dec  1 14:02:47 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:02:48 np0005541455 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 14:02:49 np0005541455 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 14:02:49 np0005541455 systemd[1]: Finished man-db-cache-update.service.
Dec  1 14:02:49 np0005541455 systemd[1]: man-db-cache-update.service: Consumed 1.376s CPU time.
Dec  1 14:02:49 np0005541455 systemd[1]: run-reb211682773c4fd2acfdf3ab3fa19d4c.service: Deactivated successfully.
Dec  1 14:02:49 np0005541455 python3.9[35650]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:02:51 np0005541455 python3.9[35931]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  1 14:02:52 np0005541455 python3.9[36083]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  1 14:02:55 np0005541455 python3.9[36236]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:02:56 np0005541455 python3.9[36388]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  1 14:02:57 np0005541455 python3.9[36540]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:02:58 np0005541455 python3.9[36692]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:03:01 np0005541455 python3.9[36815]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764615778.1451259-236-129293447820456/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c865f96d02e0a24f5a122339d49fd81effd2143b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:03:03 np0005541455 python3.9[36967]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:03:03 np0005541455 python3.9[37119]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:03:04 np0005541455 python3.9[37272]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
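Note: vgimportdevices --all writes every visible VG's PVs into the LVM devices file; the follow-up touch presumably guarantees that /etc/lvm/devices/system.devices exists (restricting LVM to the listed devices) even when no VG was found. Manual equivalent with the logged ownership and mode:

  vgimportdevices --all
  touch /etc/lvm/devices/system.devices
  chown root:root /etc/lvm/devices/system.devices
  chmod 0600 /etc/lvm/devices/system.devices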
Dec  1 14:03:05 np0005541455 python3.9[37424]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  1 14:03:05 np0005541455 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 14:03:06 np0005541455 python3.9[37578]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 14:03:08 np0005541455 python3.9[37736]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 14:03:08 np0005541455 python3.9[37896]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  1 14:03:09 np0005541455 python3.9[38049]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 14:03:10 np0005541455 python3.9[38207]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
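Note: these tasks pin the qemu UID/GID to 107, create the hugetlbfs group with GID 42477, and prepare /var/lib/vhost_sockets with a virt_cache_t label for vhost-user sockets. An approximate shell rendering (the || true guards stand in for the modules' idempotence):

  groupadd -g 107 qemu || true
  useradd -u 107 -g qemu -s /sbin/nologin -c 'qemu user' qemu || true
  groupadd -g 42477 hugetlbfs || true
  install -d -m 0755 -o qemu -g qemu /var/lib/vhost_sockets
  chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets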
Dec  1 14:03:11 np0005541455 python3.9[38359]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:03:13 np0005541455 python3.9[38513]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:03:14 np0005541455 python3.9[38665]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:03:15 np0005541455 python3.9[38788]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764615794.134522-355-44288684116361/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:03:16 np0005541455 python3.9[38940]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:03:16 np0005541455 systemd[1]: Starting Load Kernel Modules...
Dec  1 14:03:16 np0005541455 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  1 14:03:16 np0005541455 systemd-modules-load[38944]: Inserted module 'br_netfilter'
Dec  1 14:03:16 np0005541455 kernel: Bridge firewalling registered
Dec  1 14:03:16 np0005541455 systemd[1]: Finished Load Kernel Modules.
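Note: the "Inserted module" line shows that 99-edpm.conf asks systemd-modules-load to pull in br_netfilter, which is needed because bridged traffic no longer traverses ip/ip6/arptables by default, exactly as the kernel warns above. Whether the file lists further modules is not visible in the log; a minimal reconstruction:

  echo br_netfilter > /etc/modules-load.d/99-edpm.conf
  systemctl restart systemd-modules-load.service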
Dec  1 14:03:17 np0005541455 python3.9[39101]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:03:18 np0005541455 python3.9[39224]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764615796.8529706-378-253191808988561/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:03:19 np0005541455 python3.9[39376]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:03:22 np0005541455 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Dec  1 14:03:22 np0005541455 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Dec  1 14:03:22 np0005541455 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 14:03:22 np0005541455 systemd[1]: Starting man-db-cache-update.service...
Dec  1 14:03:22 np0005541455 systemd[1]: Reloading.
Dec  1 14:03:22 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:03:23 np0005541455 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 14:03:24 np0005541455 python3.9[40678]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:03:25 np0005541455 python3.9[41593]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  1 14:03:25 np0005541455 python3.9[42379]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:03:26 np0005541455 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 14:03:26 np0005541455 systemd[1]: Finished man-db-cache-update.service.
Dec  1 14:03:26 np0005541455 systemd[1]: man-db-cache-update.service: Consumed 4.930s CPU time.
Dec  1 14:03:26 np0005541455 systemd[1]: run-r1bcfe78a148942b19e68db9fd95a5409.service: Deactivated successfully.
Dec  1 14:03:26 np0005541455 python3.9[43570]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:03:27 np0005541455 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  1 14:03:27 np0005541455 systemd[1]: Starting Authorization Manager...
Dec  1 14:03:27 np0005541455 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  1 14:03:27 np0005541455 polkitd[43788]: Started polkitd version 0.117
Dec  1 14:03:27 np0005541455 systemd[1]: Started Authorization Manager.
Dec  1 14:03:28 np0005541455 python3.9[43958]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:03:28 np0005541455 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  1 14:03:28 np0005541455 systemd[1]: tuned.service: Deactivated successfully.
Dec  1 14:03:28 np0005541455 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  1 14:03:28 np0005541455 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  1 14:03:28 np0005541455 systemd[1]: Started Dynamic System Tuning Daemon.
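Note: the sequence above installs the tuned profiles, selects throughput-performance (which starts tuned on demand via polkit), then enables and restarts the daemon — hence the stop/start pair. Command-line equivalent:

  tuned-adm profile throughput-performance
  systemctl enable tuned.service
  systemctl restart tuned.service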
Dec  1 14:03:29 np0005541455 python3.9[44120]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  1 14:03:32 np0005541455 python3.9[44272]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:03:32 np0005541455 systemd[1]: Reloading.
Dec  1 14:03:32 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:03:33 np0005541455 python3.9[44461]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:03:33 np0005541455 systemd[1]: Reloading.
Dec  1 14:03:33 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:03:34 np0005541455 python3.9[44649]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:03:35 np0005541455 python3.9[44802]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:03:35 np0005541455 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
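Note: the kernel line confirms the 1 GiB swap file assembled across the 14:02:52-14:03:35 tasks. The whole sequence, reconstructed from the logged parameters (the fstab line mirrors the ansible.posix.mount arguments src=/swap, name=none, fstype=swap, opts=sw):

  dd if=/dev/zero of=/swap count=1024 bs=1M      # Ansible skips this when /swap already exists
  chown root:root /swap
  chmod 0600 /swap
  grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab
  mkswap /swap
  swapon /swap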
Dec  1 14:03:36 np0005541455 python3.9[44955]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:03:38 np0005541455 python3.9[45117]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
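Note: with ksm.service and ksmtuned.service stopped and disabled a few lines earlier, writing 2 to the KSM control file stops the ksmd thread and un-merges every page it had deduplicated:

  systemctl disable --now ksm.service ksmtuned.service
  echo 2 > /sys/kernel/mm/ksm/run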
Dec  1 14:03:39 np0005541455 python3.9[45270]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:03:39 np0005541455 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  1 14:03:39 np0005541455 systemd[1]: Stopped Apply Kernel Variables.
Dec  1 14:03:39 np0005541455 systemd[1]: Stopping Apply Kernel Variables...
Dec  1 14:03:39 np0005541455 systemd[1]: Starting Apply Kernel Variables...
Dec  1 14:03:39 np0005541455 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  1 14:03:39 np0005541455 systemd[1]: Finished Apply Kernel Variables.
Dec  1 14:03:40 np0005541455 systemd[1]: session-10.scope: Deactivated successfully.
Dec  1 14:03:40 np0005541455 systemd[1]: session-10.scope: Consumed 2min 17.051s CPU time.
Dec  1 14:03:40 np0005541455 systemd-logind[797]: Session 10 logged out. Waiting for processes to exit.
Dec  1 14:03:40 np0005541455 systemd-logind[797]: Removed session 10.
Dec  1 14:03:46 np0005541455 systemd-logind[797]: New session 11 of user zuul.
Dec  1 14:03:46 np0005541455 systemd[1]: Started Session 11 of User zuul.
Dec  1 14:03:47 np0005541455 python3.9[45453]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:03:49 np0005541455 python3.9[45607]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:03:50 np0005541455 python3.9[45763]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:03:51 np0005541455 python3.9[45914]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:03:52 np0005541455 python3.9[46070]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:03:53 np0005541455 python3.9[46154]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:03:55 np0005541455 python3.9[46307]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:03:56 np0005541455 python3.9[46480]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:03:57 np0005541455 python3.9[46632]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:03:57 np0005541455 systemd[1]: var-lib-containers-storage-overlay-compat3514172597-merged.mount: Deactivated successfully.
Dec  1 14:03:57 np0005541455 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1802389607-merged.mount: Deactivated successfully.
Dec  1 14:03:57 np0005541455 podman[46633]: 2025-12-01 19:03:57.624169752 +0000 UTC m=+0.061955273 system refresh
Dec  1 14:03:58 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:03:58 np0005541455 python3.9[46795]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:03:59 np0005541455 python3.9[46918]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764615837.9398882-109-142183555393102/.source.json follow=False _original_basename=podman_network_config.j2 checksum=ba6e6015d2c619de71381470fb513a3c0c3dde65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:04:00 np0005541455 python3.9[47070]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:04:00 np0005541455 python3.9[47193]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764615839.764508-124-129420714485817/.source.conf follow=False _original_basename=registries.conf.j2 checksum=8c73fbc0d7cddf5b89d40cde842a385025fa8102 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:04:01 np0005541455 python3.9[47345]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:04:02 np0005541455 python3.9[47497]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:04:03 np0005541455 python3.9[47649]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:04:04 np0005541455 python3.9[47801]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
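
The four community.general.ini_file events above pin engine defaults in /etc/containers/containers.conf, which is TOML but section/key based, so ini_file can edit it in place: a pids limit of 4096 per container, journald as the events logger, crun as the OCI runtime, and netavark as the network backend. Note that the logged string values carry literal double quotes, as TOML requires. A sketch of an equivalent loop (the task name and loop layout are assumptions):

    - name: Pin container engine defaults in containers.conf
      community.general.ini_file:
        path: /etc/containers/containers.conf
        create: true
        owner: root
        group: root
        mode: "0644"
        setype: etc_t
        section: "{{ item.section }}"
        option: "{{ item.option }}"
        value: "{{ item.value }}"
      loop:
        # String values embed quotes so the result is valid TOML.
        - { section: containers, option: pids_limit, value: "4096" }
        - { section: engine, option: events_logger, value: '"journald"' }
        - { section: engine, option: runtime, value: '"crun"' }
        - { section: network, option: network_backend, value: '"netavark"' }
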
Dec  1 14:04:05 np0005541455 python3.9[47951]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:04:06 np0005541455 python3.9[48105]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:04:08 np0005541455 python3.9[48258]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:04:10 np0005541455 python3.9[48418]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:04:13 np0005541455 python3.9[48571]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:04:15 np0005541455 python3.9[48724]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:04:17 np0005541455 python3.9[48880]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:04:21 np0005541455 python3.9[49048]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:04:23 np0005541455 python3.9[49201]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:04:39 np0005541455 python3.9[49530]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
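
The run of ansible.legacy.dnf events from 14:04:06 through 14:04:39 all pass download_only=True: every package set the deployment will need (base tools, os-net-config, podman and buildah, tuned, libvirt/qemu, iscsi-initiator-utils) is fetched into the dnf cache up front, and the matching state=present installs appear later in the log (14:06:11, 14:06:34, 14:06:40). That separates the network-bound downloads from the install transactions. A minimal sketch of the pattern, using one of the logged package sets (task names are assumptions):

    - name: Pre-download container tooling
      ansible.builtin.dnf:
        name:
          - podman
          - buildah
        download_only: true

    - name: Install container tooling from the cache
      ansible.builtin.dnf:
        name:
          - podman
          - buildah
        state: present
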
Dec  1 14:04:41 np0005541455 python3.9[49686]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:04:42 np0005541455 python3.9[49861]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:04:42 np0005541455 python3.9[49984]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764615881.6779668-272-208422328877575/.source.json _original_basename=.a5bm3v89 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:04:44 np0005541455 python3.9[50136]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 14:04:44 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:04:46 np0005541455 systemd[1]: var-lib-containers-storage-overlay-compat3413074409-lower\x2dmapped.mount: Deactivated successfully.
Dec  1 14:04:49 np0005541455 podman[50148]: 2025-12-01 19:04:49.905792175 +0000 UTC m=+5.651578768 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 14:04:49 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:04:49 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:04:49 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
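
From 14:04:44 onward, containers.podman.podman_image pre-pulls each service image, authenticating with the credentials written to /root/.config/containers/auth.json at 14:04:42. The repeated overlay .mount deactivations around each pull are systemd cleaning up the transient mounts podman uses while handling image layers; they are not errors. A sketch of one pull task as it was plausibly written (the task name is an assumption; the logged tag=latest is the module default, and the effective tag comes from the tag embedded in the name):

    - name: Pre-pull the ovn-controller image
      containers.podman.podman_image:
        name: quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
        auth_file: /root/.config/containers/auth.json
        pull: true
        state: present
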
Dec  1 14:04:51 np0005541455 python3.9[50444]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 14:04:51 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:01 np0005541455 podman[50457]: 2025-12-01 19:05:01.912182264 +0000 UTC m=+10.741306945 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 14:05:01 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:01 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:02 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:03 np0005541455 python3.9[50757]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 14:05:03 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:05 np0005541455 podman[50770]: 2025-12-01 19:05:05.606398656 +0000 UTC m=+2.442752295 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 14:05:05 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:05 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:05 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:06 np0005541455 python3.9[51002]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 14:05:06 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:19 np0005541455 podman[51014]: 2025-12-01 19:05:19.332216835 +0000 UTC m=+12.465878226 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 14:05:19 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:19 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:19 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:20 np0005541455 python3.9[51296]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 14:05:20 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:37 np0005541455 podman[51308]: 2025-12-01 19:05:37.724742371 +0000 UTC m=+16.766721118 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  1 14:05:37 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:37 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:37 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:38 np0005541455 python3.9[51638]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 14:05:39 np0005541455 podman[51650]: 2025-12-01 19:05:39.632857109 +0000 UTC m=+1.078709901 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  1 14:05:39 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:39 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:39 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:39 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:39 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:40 np0005541455 python3.9[51927]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 14:05:43 np0005541455 podman[51940]: 2025-12-01 19:05:43.407981723 +0000 UTC m=+2.671355018 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  1 14:05:43 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:43 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:43 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:44 np0005541455 python3.9[52193]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  1 14:05:54 np0005541455 podman[52205]: 2025-12-01 19:05:54.487934515 +0000 UTC m=+10.141559371 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  1 14:05:54 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:54 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:54 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:05:55 np0005541455 systemd[1]: session-11.scope: Deactivated successfully.
Dec  1 14:05:55 np0005541455 systemd[1]: session-11.scope: Consumed 2min 27.366s CPU time.
Dec  1 14:05:55 np0005541455 systemd-logind[797]: Session 11 logged out. Waiting for processes to exit.
Dec  1 14:05:55 np0005541455 systemd-logind[797]: Removed session 11.
Dec  1 14:06:02 np0005541455 systemd-logind[797]: New session 12 of user zuul.
Dec  1 14:06:02 np0005541455 systemd[1]: Started Session 12 of User zuul.
Dec  1 14:06:04 np0005541455 python3.9[52632]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:06:05 np0005541455 python3.9[52788]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  1 14:06:06 np0005541455 python3.9[52941]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 14:06:07 np0005541455 python3.9[53099]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
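
Before installing Open vSwitch, the play pins its service account: getent checks for an existing openvswitch user, then the group and user modules create it with a fixed uid/gid of 42476, membership in hugetlbfs (hugepage access, as used by userspace datapaths), and a nologin shell. A sketch of the pair of tasks, reconstructed from the logged parameters (task names are assumptions):

    - name: Ensure the openvswitch group exists
      ansible.builtin.group:
        name: openvswitch
        gid: 42476
        state: present

    - name: Ensure the openvswitch service account exists
      ansible.builtin.user:
        name: openvswitch
        uid: 42476
        group: openvswitch
        groups:
          - hugetlbfs
        shell: /sbin/nologin
        comment: openvswitch user
        state: present
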
Dec  1 14:06:08 np0005541455 python3.9[53259]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:06:09 np0005541455 python3.9[53343]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:06:11 np0005541455 python3.9[53504]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:06:24 np0005541455 kernel: SELinux:  Converting 2731 SID table entries...
Dec  1 14:06:24 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 14:06:24 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 14:06:24 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 14:06:24 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 14:06:24 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 14:06:24 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 14:06:24 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 14:06:24 np0005541455 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  1 14:06:24 np0005541455 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  1 14:06:25 np0005541455 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 14:06:25 np0005541455 systemd[1]: Starting man-db-cache-update.service...
Dec  1 14:06:26 np0005541455 systemd[1]: Reloading.
Dec  1 14:06:26 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:06:26 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:06:26 np0005541455 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 14:06:26 np0005541455 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 14:06:26 np0005541455 systemd[1]: Finished man-db-cache-update.service.
Dec  1 14:06:26 np0005541455 systemd[1]: run-r2f8256ab3f42483dbebbbbaccc704c2d.service: Deactivated successfully.
Dec  1 14:06:27 np0005541455 python3.9[54602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 14:06:28 np0005541455 systemd[1]: Reloading.
Dec  1 14:06:28 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:06:28 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:06:28 np0005541455 systemd[1]: Starting Open vSwitch Database Unit...
Dec  1 14:06:28 np0005541455 chown[54645]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  1 14:06:28 np0005541455 ovs-ctl[54650]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  1 14:06:28 np0005541455 ovs-ctl[54650]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  1 14:06:28 np0005541455 ovs-ctl[54650]: Starting ovsdb-server [  OK  ]
Dec  1 14:06:28 np0005541455 ovs-vsctl[54700]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  1 14:06:28 np0005541455 ovs-vsctl[54716]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"91869463-7ce7-4561-8225-db4a77bb5f12\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  1 14:06:28 np0005541455 ovs-ctl[54650]: Configuring Open vSwitch system IDs [  OK  ]
Dec  1 14:06:28 np0005541455 ovs-ctl[54650]: Enabling remote OVSDB managers [  OK  ]
Dec  1 14:06:28 np0005541455 systemd[1]: Started Open vSwitch Database Unit.
Dec  1 14:06:28 np0005541455 ovs-vsctl[54726]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  1 14:06:28 np0005541455 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  1 14:06:28 np0005541455 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  1 14:06:28 np0005541455 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  1 14:06:29 np0005541455 kernel: openvswitch: Open vSwitch switching datapath
Dec  1 14:06:29 np0005541455 ovs-ctl[54770]: Inserting openvswitch module [  OK  ]
Dec  1 14:06:29 np0005541455 ovs-ctl[54739]: Starting ovs-vswitchd [  OK  ]
Dec  1 14:06:29 np0005541455 ovs-vsctl[54787]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  1 14:06:29 np0005541455 ovs-ctl[54739]: Enabling remote OVSDB managers [  OK  ]
Dec  1 14:06:29 np0005541455 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  1 14:06:29 np0005541455 systemd[1]: Starting Open vSwitch...
Dec  1 14:06:29 np0005541455 systemd[1]: Finished Open vSwitch.
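
The sequence from 14:06:27 to 14:06:29 is a normal first start of Open vSwitch: the systemd task enables and starts openvswitch.service, ovs-ctl creates an empty /etc/openvswitch/conf.db (the chown warning about /run/openvswitch is benign before the first start populates it), ovsdb-server comes up, ovs-vsctl seeds db-version, ovs-version, and a generated system-id, and finally the openvswitch kernel module is inserted and ovs-vswitchd starts. The driving task plausibly looked like this (name is an assumption):

    - name: Enable and start Open vSwitch
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        state: started
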
Dec  1 14:06:30 np0005541455 python3.9[54939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:06:31 np0005541455 python3.9[55091]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  1 14:06:32 np0005541455 kernel: SELinux:  Converting 2745 SID table entries...
Dec  1 14:06:32 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 14:06:32 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 14:06:32 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 14:06:32 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 14:06:32 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 14:06:32 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 14:06:32 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
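
The sefcontext event at 14:06:31 installs a persistent SELinux file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t so containers may read and write that tree; the kernel SELinux lines that follow are the policy reload it triggers (reload=True). The directory itself is created with that context a few seconds later, at 14:06:38. A sketch of the task (name is an assumption):

    - name: Label /var/lib/edpm-config for container access
      community.general.sefcontext:
        target: "/var/lib/edpm-config(/.*)?"
        setype: container_file_t
        selevel: s0
        state: present
        reload: true
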
Dec  1 14:06:33 np0005541455 python3.9[55246]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:06:34 np0005541455 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  1 14:06:34 np0005541455 python3.9[55404]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:06:36 np0005541455 python3.9[55557]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
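
After the install transaction at 14:06:34, the command event above runs rpm -V across the same eighteen packages to verify their on-disk payloads against the rpm database; any output would indicate locally modified files. A sketch of the verification task (abbreviated package list; the name, register, and result handling are assumptions):

    - name: Verify installed package payloads
      ansible.builtin.command:
        argv:
          - rpm
          - -V
          - driverctl
          - lvm2
          # ...plus the remaining sixteen packages from the install list
      register: rpm_verify
      changed_when: false
      # rpm -V exits non-zero when any file differs from the package payload
      failed_when: rpm_verify.rc not in [0, 1]
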
Dec  1 14:06:38 np0005541455 python3.9[55844]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 14:06:39 np0005541455 python3.9[55994]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:06:40 np0005541455 python3.9[56148]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:06:42 np0005541455 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 14:06:42 np0005541455 systemd[1]: Starting man-db-cache-update.service...
Dec  1 14:06:42 np0005541455 systemd[1]: Reloading.
Dec  1 14:06:42 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:06:42 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:06:42 np0005541455 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 14:06:43 np0005541455 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 14:06:43 np0005541455 systemd[1]: Finished man-db-cache-update.service.
Dec  1 14:06:43 np0005541455 systemd[1]: run-r517aa31295ef4d7783c0b6372abddcce.service: Deactivated successfully.
Dec  1 14:06:43 np0005541455 python3.9[56465]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:06:43 np0005541455 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  1 14:06:43 np0005541455 systemd[1]: Stopped Network Manager Wait Online.
Dec  1 14:06:43 np0005541455 systemd[1]: Stopping Network Manager Wait Online...
Dec  1 14:06:43 np0005541455 NetworkManager[7193]: <info>  [1764616003.7739] caught SIGTERM, shutting down normally.
Dec  1 14:06:43 np0005541455 systemd[1]: Stopping Network Manager...
Dec  1 14:06:43 np0005541455 NetworkManager[7193]: <info>  [1764616003.7751] dhcp4 (eth0): canceled DHCP transaction
Dec  1 14:06:43 np0005541455 NetworkManager[7193]: <info>  [1764616003.7751] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 14:06:43 np0005541455 NetworkManager[7193]: <info>  [1764616003.7751] dhcp4 (eth0): state changed no lease
Dec  1 14:06:43 np0005541455 NetworkManager[7193]: <info>  [1764616003.7752] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 14:06:43 np0005541455 NetworkManager[7193]: <info>  [1764616003.7840] exiting (success)
Dec  1 14:06:43 np0005541455 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 14:06:43 np0005541455 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 14:06:43 np0005541455 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  1 14:06:43 np0005541455 systemd[1]: Stopped Network Manager.
Dec  1 14:06:43 np0005541455 systemd[1]: NetworkManager.service: Consumed 18.523s CPU time, 4.1M memory peak, read 0B from disk, written 18.0K to disk.
Dec  1 14:06:43 np0005541455 systemd[1]: Starting Network Manager...
Dec  1 14:06:43 np0005541455 NetworkManager[56474]: <info>  [1764616003.8642] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c12f9c43-c499-4c8a-a9df-8527ffbb5e7f)
Dec  1 14:06:43 np0005541455 NetworkManager[56474]: <info>  [1764616003.8646] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  1 14:06:43 np0005541455 NetworkManager[56474]: <info>  [1764616003.8715] manager[0x5625c747a090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  1 14:06:43 np0005541455 systemd[1]: Starting Hostname Service...
Dec  1 14:06:44 np0005541455 systemd[1]: Started Hostname Service.
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0053] hostname: hostname: using hostnamed
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0054] hostname: static hostname changed from (none) to "compute-0"
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0062] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0069] manager[0x5625c747a090]: rfkill: Wi-Fi hardware radio set enabled
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0070] manager[0x5625c747a090]: rfkill: WWAN hardware radio set enabled
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0107] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0123] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0124] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0125] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0126] manager: Networking is enabled by state file
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0129] settings: Loaded settings plugin: keyfile (internal)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0135] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0178] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0191] dhcp: init: Using DHCP client 'internal'
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0196] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0204] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0213] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0225] device (lo): Activation: starting connection 'lo' (05520b0a-6bbf-47af-9e84-ea1a46a10382)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0235] device (eth0): carrier: link connected
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0242] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0250] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0251] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0261] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0270] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0279] device (eth1): carrier: link connected
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0286] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0293] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (c8f215e6-5e9a-5e2d-a810-1cab7f3f4862) (indicated)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0294] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0303] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0313] device (eth1): Activation: starting connection 'ci-private-network' (c8f215e6-5e9a-5e2d-a810-1cab7f3f4862)
Dec  1 14:06:44 np0005541455 systemd[1]: Started Network Manager.
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0323] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0353] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0358] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0360] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0363] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0367] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0370] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0374] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0378] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0389] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0393] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0406] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0425] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0441] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0444] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0451] device (lo): Activation: successful, device activated.
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0462] dhcp4 (eth0): state changed new lease, address=38.102.83.97
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0472] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0551] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0560] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0570] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0574] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0579] device (eth1): Activation: successful, device activated.
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0594] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0597] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0602] manager: NetworkManager state is now CONNECTED_SITE
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0608] device (eth0): Activation: successful, device activated.
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0615] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  1 14:06:44 np0005541455 NetworkManager[56474]: <info>  [1764616004.0648] manager: startup complete
Dec  1 14:06:44 np0005541455 systemd[1]: Starting Network Manager Wait Online...
Dec  1 14:06:44 np0005541455 systemd[1]: Finished Network Manager Wait Online.
Dec  1 14:06:44 np0005541455 python3.9[56691]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:06:48 np0005541455 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 14:06:48 np0005541455 systemd[1]: Starting man-db-cache-update.service...
Dec  1 14:06:48 np0005541455 systemd[1]: Reloading.
Dec  1 14:06:48 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:06:48 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:06:49 np0005541455 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 14:06:49 np0005541455 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 14:06:49 np0005541455 systemd[1]: Finished man-db-cache-update.service.
Dec  1 14:06:49 np0005541455 systemd[1]: run-r6cc3331947cc4d87bad9350fbf49a1fe.service: Deactivated successfully.
Dec  1 14:06:51 np0005541455 python3.9[57149]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:06:52 np0005541455 python3.9[57301]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:06:52 np0005541455 python3.9[57455]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:06:53 np0005541455 python3.9[57607]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:06:54 np0005541455 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 14:06:54 np0005541455 python3.9[57759]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:06:55 np0005541455 python3.9[57911]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
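
The five ini_file events from 14:06:52 through 14:06:55 normalize NetworkManager's configuration before os-net-config takes over: no-auto-default=* stops NetworkManager from generating auto-default wired profiles for unconfigured NICs, and the dns= and rc-manager= overrides are removed (state=absent) from both NetworkManager.conf and cloud-init's drop-in so NetworkManager resumes managing resolv.conf. Two representative tasks (names are assumptions):

    - name: Disable auto-default connections
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: "*"
        no_extra_spaces: true
        backup: true
        mode: "0644"

    - name: Drop the cloud-init dns override
      community.general.ini_file:
        path: /etc/NetworkManager/conf.d/99-cloud-init.conf
        section: main
        option: dns
        value: none
        state: absent
        backup: true
        mode: "0644"
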
Dec  1 14:06:55 np0005541455 python3.9[58063]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:06:56 np0005541455 python3.9[58186]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616015.2980468-229-122661455042721/.source _original_basename=.6rw5xzko follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:06:57 np0005541455 python3.9[58338]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:06:58 np0005541455 python3.9[58490]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  1 14:06:59 np0005541455 python3.9[58642]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:07:01 np0005541455 python3.9[59069]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  1 14:07:03 np0005541455 ansible-async_wrapper.py[59244]: Invoked with j515220027870 300 /home/zuul/.ansible/tmp/ansible-tmp-1764616022.267616-295-60964109914616/AnsiballZ_edpm_os_net_config.py _
Dec  1 14:07:03 np0005541455 ansible-async_wrapper.py[59247]: Starting module and watcher
Dec  1 14:07:03 np0005541455 ansible-async_wrapper.py[59247]: Start watching 59248 (300)
Dec  1 14:07:03 np0005541455 ansible-async_wrapper.py[59248]: Start module (59248)
Dec  1 14:07:03 np0005541455 ansible-async_wrapper.py[59244]: Return async_wrapper task started.
Dec  1 14:07:03 np0005541455 python3.9[59249]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
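
The edpm_os_net_config module (a role-local module, as logged) is launched through Ansible's async wrapper: the wrapper forks the module, a watcher process enforces the 300-second budget, and control returns to the playbook immediately. That matters here because applying /etc/os-net-config/config.yaml can briefly drop the very connection Ansible is using. With use_nmstate=True the module drives the change through NetworkManager/nmstate, which is what produces the checkpoint and connection audit trail below. A sketch of the task (the name and poll interval are assumptions):

    - name: Apply os-net-config network configuration
      edpm_os_net_config:
        config_file: /etc/os-net-config/config.yaml
        use_nmstate: true
        detailed_exit_codes: true
        cleanup: true
        safe_defaults: false
        debug: true
      async: 300   # matches the 300-second watcher in the log
      poll: 3
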
Dec  1 14:07:03 np0005541455 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  1 14:07:03 np0005541455 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  1 14:07:03 np0005541455 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  1 14:07:03 np0005541455 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  1 14:07:03 np0005541455 kernel: cfg80211: failed to load regulatory.db
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.1900] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.1922] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2453] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2457] audit: op="connection-add" uuid="f5a00056-9a6e-45bd-a53c-158deb034919" name="br-ex-br" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2481] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2485] audit: op="connection-add" uuid="32079a64-c312-4825-902d-ef5fe8abe123" name="br-ex-port" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2503] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2506] audit: op="connection-add" uuid="74eeb64d-a284-4118-9826-e6d82b834cd9" name="eth1-port" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2525] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2528] audit: op="connection-add" uuid="92ebc737-26a8-43b2-9bdb-084f64acf2ff" name="vlan20-port" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2546] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2551] audit: op="connection-add" uuid="6b66216e-6183-4967-92e2-b7733ee78d68" name="vlan21-port" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2570] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2573] audit: op="connection-add" uuid="c777ddf1-566d-4c18-beaa-ce57efc0ba29" name="vlan22-port" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2606] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2623] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2624] audit: op="connection-add" uuid="00584f7d-646c-43f7-9452-862ba99a20a2" name="br-ex-if" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2688] audit: op="connection-update" uuid="c8f215e6-5e9a-5e2d-a810-1cab7f3f4862" name="ci-private-network" args="ovs-interface.type,ovs-external-ids.data,ipv6.routing-rules,ipv6.routes,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,ipv6.method,connection.controller,connection.port-type,connection.timestamp,connection.slave-type,connection.master,ipv4.routing-rules,ipv4.routes,ipv4.addresses,ipv4.method,ipv4.dns,ipv4.never-default" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2704] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2707] audit: op="connection-add" uuid="73a25f36-489d-4b14-b616-b6a5ead12b91" name="vlan20-if" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2721] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2722] audit: op="connection-add" uuid="eca04b3e-9510-4cbf-ad91-ff640a36c8ef" name="vlan21-if" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2736] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2737] audit: op="connection-add" uuid="0bfc9937-bff7-4647-a4ae-2d8b92fe27c1" name="vlan22-if" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2748] audit: op="connection-delete" uuid="fbdaa184-f8a1-3bfc-a799-1b0024f7214e" name="Wired connection 1" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2759] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2768] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2771] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (f5a00056-9a6e-45bd-a53c-158deb034919)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2772] audit: op="connection-activate" uuid="f5a00056-9a6e-45bd-a53c-158deb034919" name="br-ex-br" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2773] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2778] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2781] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (32079a64-c312-4825-902d-ef5fe8abe123)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2782] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2787] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2790] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (74eeb64d-a284-4118-9826-e6d82b834cd9)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2791] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2797] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2800] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (92ebc737-26a8-43b2-9bdb-084f64acf2ff)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2802] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2807] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2810] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (6b66216e-6183-4967-92e2-b7733ee78d68)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2812] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2818] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2822] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (c777ddf1-566d-4c18-beaa-ce57efc0ba29)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2822] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2824] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2826] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2831] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2836] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2841] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (00584f7d-646c-43f7-9452-862ba99a20a2)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2842] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2845] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2847] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2848] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2849] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2883] device (eth1): disconnecting for new activation request.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2891] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2899] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2902] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2905] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2911] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2920] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2928] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (73a25f36-489d-4b14-b616-b6a5ead12b91)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2929] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2936] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2940] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2942] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2947] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2955] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2962] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (eca04b3e-9510-4cbf-ad91-ff640a36c8ef)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2964] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2970] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2972] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2974] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2979] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2987] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2993] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (0bfc9937-bff7-4647-a4ae-2d8b92fe27c1)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.2994] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3000] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3003] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3005] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3009] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3037] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3041] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3050] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3054] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3070] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3079] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 kernel: ovs-system: entered promiscuous mode
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3099] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3108] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3111] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 systemd-udevd[59253]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3123] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 kernel: Timeout policy base is empty
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3132] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3139] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3142] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3153] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3164] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3171] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3176] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3186] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3196] dhcp4 (eth0): canceled DHCP transaction
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3196] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3197] dhcp4 (eth0): state changed no lease
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3199] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3216] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3223] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59250 uid=0 result="fail" reason="Device is not activated"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3238] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3277] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3279] dhcp4 (eth0): state changed new lease, address=38.102.83.97
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3286] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3332] device (eth1): disconnecting for new activation request.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3332] audit: op="connection-activate" uuid="c8f215e6-5e9a-5e2d-a810-1cab7f3f4862" name="ci-private-network" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3333] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3444] device (eth1): Activation: starting connection 'ci-private-network' (c8f215e6-5e9a-5e2d-a810-1cab7f3f4862)
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3448] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3465] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3468] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3475] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3480] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3486] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3487] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3488] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3489] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3490] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3491] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59250 uid=0 result="success"
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3493] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3500] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3504] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3506] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3509] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3511] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3514] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3519] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3522] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3525] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3528] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3532] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3534] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3576] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3578] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3583] device (eth1): Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 kernel: br-ex: entered promiscuous mode
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3802] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3813] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3830] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3833] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3838] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 kernel: vlan22: entered promiscuous mode
Dec  1 14:07:05 np0005541455 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3954] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.3963] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 kernel: vlan21: entered promiscuous mode
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4002] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4004] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4010] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 kernel: vlan20: entered promiscuous mode
Dec  1 14:07:05 np0005541455 systemd-udevd[59256]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4098] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4110] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4134] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4136] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4143] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4193] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4207] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4229] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4231] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  1 14:07:05 np0005541455 NetworkManager[56474]: <info>  [1764616025.4238] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  1 14:07:06 np0005541455 NetworkManager[56474]: <info>  [1764616026.5705] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59250 uid=0 result="success"
Dec  1 14:07:06 np0005541455 NetworkManager[56474]: <info>  [1764616026.8550] checkpoint[0x5625c7450950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  1 14:07:06 np0005541455 NetworkManager[56474]: <info>  [1764616026.8554] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59250 uid=0 result="success"
Dec  1 14:07:07 np0005541455 python3.9[59583]: ansible-ansible.legacy.async_status Invoked with jid=j515220027870.59244 mode=status _async_dir=/root/.ansible_async
Dec  1 14:07:07 np0005541455 NetworkManager[56474]: <info>  [1764616027.2214] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59250 uid=0 result="success"
Dec  1 14:07:07 np0005541455 NetworkManager[56474]: <info>  [1764616027.2229] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59250 uid=0 result="success"
Dec  1 14:07:07 np0005541455 NetworkManager[56474]: <info>  [1764616027.4619] audit: op="networking-control" arg="global-dns-configuration" pid=59250 uid=0 result="success"
Dec  1 14:07:07 np0005541455 NetworkManager[56474]: <info>  [1764616027.4653] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  1 14:07:07 np0005541455 NetworkManager[56474]: <info>  [1764616027.4686] audit: op="networking-control" arg="global-dns-configuration" pid=59250 uid=0 result="success"
Dec  1 14:07:07 np0005541455 NetworkManager[56474]: <info>  [1764616027.4834] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59250 uid=0 result="success"
Dec  1 14:07:07 np0005541455 NetworkManager[56474]: <info>  [1764616027.6385] checkpoint[0x5625c7450a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  1 14:07:07 np0005541455 NetworkManager[56474]: <info>  [1764616027.6390] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59250 uid=0 result="success"
Dec  1 14:07:07 np0005541455 ansible-async_wrapper.py[59248]: Module complete (59248)
Dec  1 14:07:08 np0005541455 ansible-async_wrapper.py[59247]: Done in kid B.
Dec  1 14:07:10 np0005541455 python3.9[59689]: ansible-ansible.legacy.async_status Invoked with jid=j515220027870.59244 mode=status _async_dir=/root/.ansible_async
Dec  1 14:07:11 np0005541455 python3.9[59788]: ansible-ansible.legacy.async_status Invoked with jid=j515220027870.59244 mode=cleanup _async_dir=/root/.ansible_async
Dec  1 14:07:12 np0005541455 python3.9[59940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:07:12 np0005541455 python3.9[60063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616031.6620104-322-54778384399446/.source.returncode _original_basename=.zjai6851 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:07:13 np0005541455 python3.9[60216]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:07:14 np0005541455 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  1 14:07:14 np0005541455 python3.9[60341]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616033.2063146-338-96566818101123/.source.cfg _original_basename=.3rdln0e5 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:07:15 np0005541455 python3.9[60493]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:07:16 np0005541455 systemd[1]: Reloading Network Manager...
Dec  1 14:07:16 np0005541455 NetworkManager[56474]: <info>  [1764616036.4919] audit: op="reload" arg="0" pid=60497 uid=0 result="success"
Dec  1 14:07:16 np0005541455 NetworkManager[56474]: <info>  [1764616036.4926] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  1 14:07:16 np0005541455 systemd[1]: Reloaded Network Manager.
Dec  1 14:07:16 np0005541455 systemd[1]: session-12.scope: Deactivated successfully.
Dec  1 14:07:16 np0005541455 systemd[1]: session-12.scope: Consumed 52.175s CPU time.
Dec  1 14:07:16 np0005541455 systemd-logind[797]: Session 12 logged out. Waiting for processes to exit.
Dec  1 14:07:16 np0005541455 systemd-logind[797]: Removed session 12.
Dec  1 14:07:22 np0005541455 systemd-logind[797]: New session 13 of user zuul.
Dec  1 14:07:22 np0005541455 systemd[1]: Started Session 13 of User zuul.
Dec  1 14:07:23 np0005541455 python3.9[60682]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:07:24 np0005541455 python3.9[60836]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:07:26 np0005541455 python3.9[61025]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:07:26 np0005541455 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  1 14:07:26 np0005541455 systemd-logind[797]: Session 13 logged out. Waiting for processes to exit.
Dec  1 14:07:26 np0005541455 systemd[1]: session-13.scope: Deactivated successfully.
Dec  1 14:07:26 np0005541455 systemd[1]: session-13.scope: Consumed 2.884s CPU time.
Dec  1 14:07:26 np0005541455 systemd-logind[797]: Removed session 13.
Dec  1 14:07:32 np0005541455 systemd-logind[797]: New session 14 of user zuul.
Dec  1 14:07:32 np0005541455 systemd[1]: Started Session 14 of User zuul.
Dec  1 14:07:33 np0005541455 python3.9[61208]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:07:34 np0005541455 python3.9[61362]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:07:35 np0005541455 python3.9[61518]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:07:36 np0005541455 python3.9[61604]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:07:38 np0005541455 python3.9[61758]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:07:40 np0005541455 python3.9[61949]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:07:40 np0005541455 python3.9[62101]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:07:40 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:07:41 np0005541455 python3.9[62263]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:07:42 np0005541455 python3.9[62341]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:07:43 np0005541455 python3.9[62493]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:07:43 np0005541455 python3.9[62571]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:07:44 np0005541455 python3.9[62723]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:07:45 np0005541455 python3.9[62875]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:07:46 np0005541455 python3.9[63027]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:07:46 np0005541455 python3.9[63179]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:07:47 np0005541455 python3.9[63331]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:07:50 np0005541455 python3.9[63484]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:07:51 np0005541455 python3.9[63639]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:07:52 np0005541455 python3.9[63791]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:07:52 np0005541455 python3.9[63943]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:07:54 np0005541455 python3.9[64096]: ansible-service_facts Invoked
Dec  1 14:07:54 np0005541455 network[64113]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 14:07:54 np0005541455 network[64114]: 'network-scripts' will be removed from distribution in near future.
Dec  1 14:07:54 np0005541455 network[64115]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 14:08:00 np0005541455 python3.9[64567]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:08:03 np0005541455 python3.9[64720]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  1 14:08:04 np0005541455 python3.9[64872]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:05 np0005541455 python3.9[64997]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616084.0012019-232-269331454456791/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:06 np0005541455 python3.9[65151]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:07 np0005541455 python3.9[65276]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616085.8713636-247-58889508714027/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:08 np0005541455 python3.9[65430]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:09 np0005541455 python3.9[65584]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:08:10 np0005541455 python3.9[65668]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:08:12 np0005541455 python3.9[65822]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:08:13 np0005541455 python3.9[65906]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:08:13 np0005541455 chronyd[792]: chronyd exiting
Dec  1 14:08:13 np0005541455 systemd[1]: Stopping NTP client/server...
Dec  1 14:08:13 np0005541455 systemd[1]: chronyd.service: Deactivated successfully.
Dec  1 14:08:13 np0005541455 systemd[1]: Stopped NTP client/server.
Dec  1 14:08:13 np0005541455 systemd[1]: Starting NTP client/server...
Dec  1 14:08:13 np0005541455 chronyd[65914]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  1 14:08:13 np0005541455 chronyd[65914]: Frequency -31.770 +/- 0.332 ppm read from /var/lib/chrony/drift
Dec  1 14:08:13 np0005541455 chronyd[65914]: Loaded seccomp filter (level 2)
Dec  1 14:08:13 np0005541455 systemd[1]: Started NTP client/server.
Dec  1 14:08:13 np0005541455 systemd[1]: session-14.scope: Deactivated successfully.
Dec  1 14:08:13 np0005541455 systemd-logind[797]: Session 14 logged out. Waiting for processes to exit.
Dec  1 14:08:13 np0005541455 systemd[1]: session-14.scope: Consumed 27.930s CPU time.
Dec  1 14:08:13 np0005541455 systemd-logind[797]: Removed session 14.
Dec  1 14:08:19 np0005541455 systemd-logind[797]: New session 15 of user zuul.
Dec  1 14:08:19 np0005541455 systemd[1]: Started Session 15 of User zuul.
Dec  1 14:08:20 np0005541455 python3.9[66093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:08:21 np0005541455 python3.9[66249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:23 np0005541455 python3.9[66424]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:23 np0005541455 python3.9[66502]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.btkdiy5c recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:24 np0005541455 python3.9[66654]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:25 np0005541455 python3.9[66777]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616103.9909716-61-84171027076251/.source _original_basename=.1u4htkkt follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:26 np0005541455 python3.9[66929]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:08:26 np0005541455 python3.9[67081]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:27 np0005541455 python3.9[67204]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616106.2902346-85-270377040397237/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:08:28 np0005541455 python3.9[67356]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:28 np0005541455 python3.9[67479]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616107.7428162-85-191296158015006/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:08:29 np0005541455 python3.9[67631]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:30 np0005541455 python3.9[67783]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:31 np0005541455 python3.9[67906]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616109.8717227-122-178126773046754/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:31 np0005541455 python3.9[68058]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:32 np0005541455 python3.9[68181]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616111.323356-137-172826521365917/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:33 np0005541455 python3.9[68333]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:08:33 np0005541455 systemd[1]: Reloading.
Dec  1 14:08:33 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:08:33 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:08:33 np0005541455 systemd[1]: Reloading.
Dec  1 14:08:33 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:08:33 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:08:34 np0005541455 systemd[1]: Starting EDPM Container Shutdown...
Dec  1 14:08:34 np0005541455 systemd[1]: Finished EDPM Container Shutdown.
Dec  1 14:08:34 np0005541455 python3.9[68561]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:35 np0005541455 python3.9[68684]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616114.256487-160-279680045040591/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:36 np0005541455 python3.9[68836]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:37 np0005541455 python3.9[68959]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616115.7828724-175-117129111966756/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:38 np0005541455 python3.9[69111]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:08:38 np0005541455 systemd[1]: Reloading.
Dec  1 14:08:38 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:08:38 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:08:38 np0005541455 systemd[1]: Reloading.
Dec  1 14:08:38 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:08:38 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:08:38 np0005541455 systemd[1]: Starting Create netns directory...
Dec  1 14:08:38 np0005541455 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 14:08:38 np0005541455 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 14:08:38 np0005541455 systemd[1]: Finished Create netns directory.
Dec  1 14:08:39 np0005541455 python3.9[69337]: ansible-ansible.builtin.service_facts Invoked
Dec  1 14:08:39 np0005541455 network[69354]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 14:08:39 np0005541455 network[69355]: 'network-scripts' will be removed from distribution in near future.
Dec  1 14:08:39 np0005541455 network[69356]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 14:08:44 np0005541455 python3.9[69618]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:08:44 np0005541455 systemd[1]: Reloading.
Dec  1 14:08:44 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:08:44 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:08:44 np0005541455 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  1 14:08:44 np0005541455 iptables.init[69660]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  1 14:08:44 np0005541455 iptables.init[69660]: iptables: Flushing firewall rules: [  OK  ]
Dec  1 14:08:44 np0005541455 systemd[1]: iptables.service: Deactivated successfully.
Dec  1 14:08:44 np0005541455 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  1 14:08:45 np0005541455 python3.9[69856]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:08:47 np0005541455 python3.9[70010]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:08:47 np0005541455 systemd[1]: Reloading.
Dec  1 14:08:47 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:08:47 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:08:47 np0005541455 systemd[1]: Starting Netfilter Tables...
Dec  1 14:08:47 np0005541455 systemd[1]: Finished Netfilter Tables.
Dec  1 14:08:48 np0005541455 python3.9[70202]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:08:49 np0005541455 python3.9[70355]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:50 np0005541455 python3.9[70480]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616128.8152945-244-209433814500643/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:51 np0005541455 python3.9[70633]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:08:51 np0005541455 systemd[1]: Reloading OpenSSH server daemon...
Dec  1 14:08:51 np0005541455 systemd[1]: Reloaded OpenSSH server daemon.
Dec  1 14:08:51 np0005541455 python3.9[70789]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:52 np0005541455 python3.9[70941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:53 np0005541455 python3.9[71064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616132.024207-275-246231444057613/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:54 np0005541455 python3.9[71216]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  1 14:08:54 np0005541455 systemd[1]: Starting Time & Date Service...
Dec  1 14:08:54 np0005541455 systemd[1]: Started Time & Date Service.
Dec  1 14:08:55 np0005541455 python3.9[71372]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:56 np0005541455 python3.9[71524]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:56 np0005541455 python3.9[71647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616135.4546335-310-72353634069544/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:57 np0005541455 python3.9[71799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:58 np0005541455 python3.9[71922]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616136.8838053-325-102382191708276/.source.yaml _original_basename=.gwhz5yev follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:08:58 np0005541455 python3.9[72074]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:08:59 np0005541455 python3.9[72197]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616138.36031-340-190871115982165/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:00 np0005541455 python3.9[72349]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:09:01 np0005541455 python3.9[72502]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:09:02 np0005541455 python3[72655]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 14:09:03 np0005541455 python3.9[72807]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:09:03 np0005541455 python3.9[72930]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616142.523695-379-191632169346539/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:04 np0005541455 python3.9[73082]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:09:05 np0005541455 python3.9[73205]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616144.015531-394-235364139219069/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:06 np0005541455 python3.9[73357]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:09:06 np0005541455 python3.9[73480]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616145.6018388-409-76090918968797/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:07 np0005541455 python3.9[73632]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:09:08 np0005541455 python3.9[73755]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616147.2106311-424-258769521552589/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:09 np0005541455 python3.9[73907]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:09:10 np0005541455 python3.9[74030]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616148.895057-439-242336157718028/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:11 np0005541455 python3.9[74182]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:11 np0005541455 python3.9[74334]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:09:12 np0005541455 python3.9[74493]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:13 np0005541455 python3.9[74646]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:14 np0005541455 python3.9[74798]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:15 np0005541455 python3.9[74950]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  1 14:09:15 np0005541455 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 14:09:15 np0005541455 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 14:09:16 np0005541455 python3.9[75104]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  1 14:09:16 np0005541455 systemd[1]: session-15.scope: Deactivated successfully.
Dec  1 14:09:16 np0005541455 systemd[1]: session-15.scope: Consumed 41.059s CPU time.
Dec  1 14:09:16 np0005541455 systemd-logind[797]: Session 15 logged out. Waiting for processes to exit.
Dec  1 14:09:16 np0005541455 systemd-logind[797]: Removed session 15.
Dec  1 14:09:21 np0005541455 systemd-logind[797]: New session 16 of user zuul.
Dec  1 14:09:21 np0005541455 systemd[1]: Started Session 16 of User zuul.
Dec  1 14:09:22 np0005541455 python3.9[75285]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  1 14:09:23 np0005541455 python3.9[75437]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:09:24 np0005541455 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  1 14:09:25 np0005541455 python3.9[75591]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:09:26 np0005541455 python3.9[75743]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIqp1X0nHyqTYQgbsxjpXf8vuC75x4n1sx4QMVBFz5HFVZvaF+D/6SKxq04kGT4Fg85a4BvgVreHvuHQKyKZhsk+y1mzjjg3EtnXjkt76KyYdcyHpa4XCKK9T0Fdvl1i/UD2LUbXSP20SxXQe7YUhNgSNkj9s/5nHGe7djiDt6VPwrdZgeApDxxghFlYOO39TRkWckOpYW4uINKfC66NagP2rv9gOr1kCNzeCKY8PS7cqvclnJXiEV7TVJGJIsKvSd44oBTZfoboOBSwqr5bBfhadGpd4EuemSbsjMDIjz9mU3Izj3YOo0wOusWdqdBpBXkL6+0eK7HAX8TscEgX7dwFiD8mBX9iAa4aL6xreXqDyMEDOV0NJ3Cg/8vXjAkcrs+jH7B91caaqo6Ozvb9Bla8ifbDd0Q7d7wGZEGKskTQ0ui1438909jfgu4LK5idKS1N/YggqjebnZMyylTueag/1LNR5x8ARVKJk/rtC76k70THR3naqkTFy60yH0GVs=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFPDO3tWgv0TFgg5Kjr3tCOqP/rkHHtuL8EwmUUOALZH#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJLdrHkDjbuO0j3A5SD4lNsuElMt2GLUx6WZQrFjDi3XdaHXXIUdSLxbLC+c4+2IHgVrIrgj3ZT5aaohyi5wx+U=#012 create=True mode=0644 path=/tmp/ansible.tggb97nt state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:27 np0005541455 python3.9[75895]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tggb97nt' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:09:28 np0005541455 python3.9[76049]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tggb97nt state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:29 np0005541455 systemd[1]: session-16.scope: Deactivated successfully.
Dec  1 14:09:29 np0005541455 systemd[1]: session-16.scope: Consumed 4.467s CPU time.
Dec  1 14:09:29 np0005541455 systemd-logind[797]: Session 16 logged out. Waiting for processes to exit.
Dec  1 14:09:29 np0005541455 systemd-logind[797]: Removed session 16.
Dec  1 14:09:34 np0005541455 systemd-logind[797]: New session 17 of user zuul.
Dec  1 14:09:34 np0005541455 systemd[1]: Started Session 17 of User zuul.
Dec  1 14:09:35 np0005541455 python3.9[76227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:09:36 np0005541455 python3.9[76383]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  1 14:09:37 np0005541455 python3.9[76537]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:09:38 np0005541455 python3.9[76690]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:09:39 np0005541455 python3.9[76843]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:09:40 np0005541455 python3.9[76997]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:09:41 np0005541455 python3.9[77152]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:09:41 np0005541455 systemd[1]: session-17.scope: Deactivated successfully.
Dec  1 14:09:41 np0005541455 systemd[1]: session-17.scope: Consumed 4.985s CPU time.
Dec  1 14:09:41 np0005541455 systemd-logind[797]: Session 17 logged out. Waiting for processes to exit.
Dec  1 14:09:41 np0005541455 systemd-logind[797]: Removed session 17.
Dec  1 14:09:47 np0005541455 systemd-logind[797]: New session 18 of user zuul.
Dec  1 14:09:47 np0005541455 systemd[1]: Started Session 18 of User zuul.
Dec  1 14:09:48 np0005541455 python3.9[77330]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:09:50 np0005541455 python3.9[77486]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:09:50 np0005541455 python3.9[77570]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  1 14:09:53 np0005541455 python3.9[77721]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:09:54 np0005541455 python3.9[77872]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 14:09:55 np0005541455 python3.9[78022]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:09:55 np0005541455 python3.9[78172]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:09:56 np0005541455 systemd[1]: session-18.scope: Deactivated successfully.
Dec  1 14:09:56 np0005541455 systemd[1]: session-18.scope: Consumed 5.962s CPU time.
Dec  1 14:09:56 np0005541455 systemd-logind[797]: Session 18 logged out. Waiting for processes to exit.
Dec  1 14:09:56 np0005541455 systemd-logind[797]: Removed session 18.
Dec  1 14:10:01 np0005541455 systemd-logind[797]: New session 19 of user zuul.
Dec  1 14:10:01 np0005541455 systemd[1]: Started Session 19 of User zuul.
Dec  1 14:10:02 np0005541455 python3.9[78352]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:10:04 np0005541455 python3.9[78508]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:05 np0005541455 python3.9[78660]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:06 np0005541455 python3.9[78812]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:07 np0005541455 python3.9[78935]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616205.7370539-65-975296651943/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4255ef4131a37e9d4b68eda75017cd71e3949d92 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:08 np0005541455 python3.9[79087]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:09 np0005541455 python3.9[79210]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616207.813189-65-271327710437910/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ba1204aa9e67e358645f61b0e553decce3ef604b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:09 np0005541455 python3.9[79362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:10 np0005541455 python3.9[79485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616209.2372687-65-35850116439781/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d4e1bf035aac23f67a91e48fc3cb87a5da09dd40 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:11 np0005541455 python3.9[79637]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:12 np0005541455 python3.9[79789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:13 np0005541455 python3.9[79941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:13 np0005541455 python3.9[80064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616212.4702783-124-167544635241122/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e9b6c40da183fb04e4dfb0b72431f044bc268b64 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:14 np0005541455 python3.9[80216]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:15 np0005541455 python3.9[80339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616213.9558601-124-125321297255949/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ba1204aa9e67e358645f61b0e553decce3ef604b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:16 np0005541455 python3.9[80491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:16 np0005541455 python3.9[80614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616215.450809-124-143305498022300/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=eef809465296c7c2d0810abcf999cd072af1abc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:17 np0005541455 python3.9[80766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:18 np0005541455 python3.9[80918]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:19 np0005541455 python3.9[81070]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:20 np0005541455 python3.9[81193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616218.7035525-183-33485811646891/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=371c12bbd9abef1eafc20af9ad74fae10535a990 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:20 np0005541455 python3.9[81345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:21 np0005541455 python3.9[81468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616220.249071-183-182125196634759/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=746e8fbc84fbc410a0894b458c35795135ac978c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:22 np0005541455 python3.9[81620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:22 np0005541455 chronyd[65914]: Selected source 162.159.200.123 (pool.ntp.org)
Dec  1 14:10:22 np0005541455 python3.9[81743]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616221.7063885-183-715793059125/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e036ffe31d3eb5edd27a970fc9a65d578e5532bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:23 np0005541455 python3.9[81895]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:24 np0005541455 python3.9[82047]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:25 np0005541455 python3.9[82199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:26 np0005541455 python3.9[82322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616224.659046-242-122672188143032/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e7fa81c531745a4a63975b679add564222c2fef8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:26 np0005541455 python3.9[82474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:27 np0005541455 python3.9[82597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616226.242635-242-37028177282750/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=2443bc4d88882bbf23136914a9ff6964e1d6f270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:28 np0005541455 python3.9[82749]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:28 np0005541455 python3.9[82872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616227.7623558-242-222325349652763/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=930e08778f9e3bd2aef72263c9a7ed724a96afe2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:29 np0005541455 python3.9[83024]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:30 np0005541455 python3.9[83176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:31 np0005541455 python3.9[83328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:31 np0005541455 python3.9[83451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616230.8062441-301-46102982065625/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e1d59ccc0f40d2f7ef0c09b702d3db6230f3177d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:32 np0005541455 python3.9[83603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:33 np0005541455 python3.9[83726]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616232.1810775-301-154398402971456/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=746e8fbc84fbc410a0894b458c35795135ac978c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:34 np0005541455 python3.9[83878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:34 np0005541455 python3.9[84001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616233.5115771-301-83492142841644/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1609eb68be184da2b95564481cce7f09ff007a63 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:36 np0005541455 python3.9[84153]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:37 np0005541455 python3.9[84305]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:37 np0005541455 python3.9[84428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616236.4807723-369-143061839074809/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c865f96d02e0a24f5a122339d49fd81effd2143b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:38 np0005541455 python3.9[84580]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:39 np0005541455 python3.9[84732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:40 np0005541455 python3.9[84855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616238.7785163-393-156401871067135/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c865f96d02e0a24f5a122339d49fd81effd2143b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:40 np0005541455 python3.9[85007]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:41 np0005541455 python3.9[85159]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:42 np0005541455 python3.9[85282]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616241.2180548-417-15068798049907/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c865f96d02e0a24f5a122339d49fd81effd2143b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:43 np0005541455 python3.9[85434]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:44 np0005541455 python3.9[85586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:44 np0005541455 python3.9[85709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616243.503863-441-201373237009566/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c865f96d02e0a24f5a122339d49fd81effd2143b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:45 np0005541455 python3.9[85861]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:46 np0005541455 python3.9[86013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:46 np0005541455 python3.9[86136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616245.6442187-465-277448472824375/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c865f96d02e0a24f5a122339d49fd81effd2143b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:47 np0005541455 python3.9[86288]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:48 np0005541455 python3.9[86440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:48 np0005541455 python3.9[86563]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616247.704139-489-62930794456038/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c865f96d02e0a24f5a122339d49fd81effd2143b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:49 np0005541455 python3.9[86715]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:50 np0005541455 python3.9[86867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:51 np0005541455 python3.9[86990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616250.0124462-513-158003598097224/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c865f96d02e0a24f5a122339d49fd81effd2143b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:52 np0005541455 python3.9[87142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:10:52 np0005541455 python3.9[87294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:10:53 np0005541455 python3.9[87417]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616252.243068-537-232285706389857/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c865f96d02e0a24f5a122339d49fd81effd2143b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:10:53 np0005541455 systemd[1]: session-19.scope: Deactivated successfully.
Dec  1 14:10:53 np0005541455 systemd[1]: session-19.scope: Consumed 40.615s CPU time.
Dec  1 14:10:53 np0005541455 systemd-logind[797]: Session 19 logged out. Waiting for processes to exit.
Dec  1 14:10:53 np0005541455 systemd-logind[797]: Removed session 19.
Dec  1 14:11:00 np0005541455 systemd-logind[797]: New session 20 of user zuul.
Dec  1 14:11:00 np0005541455 systemd[1]: Started Session 20 of User zuul.
Dec  1 14:11:01 np0005541455 python3.9[87595]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:11:02 np0005541455 python3.9[87751]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:11:03 np0005541455 python3.9[87903]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:11:04 np0005541455 python3.9[88054]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:11:05 np0005541455 python3.9[88206]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  1 14:11:07 np0005541455 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
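The seboolean task above persists virt_sandbox_use_netlink, and the avc load_policy line that follows is the SELinux policy reload it triggers (seqno bumped to 11). The host-level equivalent is a one-liner (sketch):

    # Equivalent of ansible.posix.seboolean with persistent=True:
    setsebool -P virt_sandbox_use_netlink on   # -P rebuilds and reloads policy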
Dec  1 14:11:07 np0005541455 python3.9[88362]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:11:08 np0005541455 python3.9[88446]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:11:11 np0005541455 python3.9[88599]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
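Together, the dnf and systemd tasks above ensure Open vSwitch is installed, enabled, and running before any OVN configuration is attempted. Shell equivalent (sketch):

    dnf -y install openvswitch
    systemctl enable --now openvswitch.service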
Dec  1 14:11:12 np0005541455 python3[88754]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
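In the edpm_nftables_snippet content above, #012 is the syslog escape for a newline. Decoded, the YAML dropped into /var/lib/edpm-config/firewall/ovn.yaml opens UDP 4789 (VXLAN) and UDP 6081 (Geneve), and adds raw-table NOTRACK rules so Geneve traffic bypasses conntrack:

    cat > /var/lib/edpm-config/firewall/ovn.yaml <<'EOF'
    - rule_name: 118 neutron vxlan networks
      rule:
        proto: udp
        dport: 4789
    - rule_name: 119 neutron geneve networks
      rule:
        proto: udp
        dport: 6081
        state: ["UNTRACKED"]
    - rule_name: 120 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: OUTPUT
        jump: NOTRACK
        action: append
        state: []
    - rule_name: 121 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: PREROUTING
        jump: NOTRACK
        action: append
        state: []
    EOF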
Dec  1 14:11:13 np0005541455 python3.9[88906]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:14 np0005541455 python3.9[89058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:14 np0005541455 python3.9[89136]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:15 np0005541455 python3.9[89288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:15 np0005541455 python3.9[89366]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1u27_yfa recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:16 np0005541455 python3.9[89518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:17 np0005541455 python3.9[89596]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:17 np0005541455 python3.9[89748]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:11:18 np0005541455 python3[89901]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 14:11:19 np0005541455 python3.9[90053]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:20 np0005541455 python3.9[90178]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616279.1395059-157-72175366142547/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:21 np0005541455 python3.9[90330]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:22 np0005541455 python3.9[90455]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616281.1078305-172-72463062149563/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:23 np0005541455 python3.9[90607]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:23 np0005541455 python3.9[90732]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616282.512209-187-93502399101461/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:24 np0005541455 python3.9[90884]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:25 np0005541455 python3.9[91009]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616283.8953784-202-166194136493360/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:25 np0005541455 python3.9[91161]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:26 np0005541455 python3.9[91286]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616285.2849045-217-223468087407140/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:27 np0005541455 python3.9[91438]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:27 np0005541455 python3.9[91590]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:11:28 np0005541455 python3.9[91745]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
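Decoded the same way (#012 = newline), the managed block written into /etc/sysconfig/nftables.conf, and validated with nft -c -f %s before the edit is committed, is:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK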
Dec  1 14:11:29 np0005541455 python3.9[91897]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:11:30 np0005541455 python3.9[92050]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:11:31 np0005541455 python3.9[92204]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:11:32 np0005541455 python3.9[92359]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
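The full verify-then-apply cycle, reconstructed from the commands logged above: the candidate ruleset is syntax-checked as one stream, the chains file is loaded first (safe to repeat), and the flush/rules/update-jumps stream is applied only while the .changed sentinel from the rules copy exists, after which the sentinel is removed:

    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # dry-run check only
    nft -f /etc/nftables/edpm-chains.nft             # idempotent chain creation
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
      cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
          /etc/nftables/edpm-update-jumps.nft | nft -f -
      rm -f /etc/nftables/edpm-rules.nft.changed     # clear the sentinel
    fi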
Dec  1 14:11:33 np0005541455 python3.9[92509]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:11:34 np0005541455 python3.9[92662]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:11:34 np0005541455 ovs-vsctl[92663]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
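Everything ovn-controller needs to join the control plane is carried in these Open_vSwitch external_ids: the southbound DB endpoint (ovn-remote), the tunnel endpoint and encapsulation (ovn-encap-ip, ovn-encap-type), and the provider bridge mappings. Individual keys can be read back the same way (sketch):

    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote       # "ssl:ovsdbserver-sb.openstack.svc:6642"
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type   # geneve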
Dec  1 14:11:35 np0005541455 python3.9[92815]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:11:36 np0005541455 python3.9[92970]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:11:36 np0005541455 ovs-vsctl[92971]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
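The two steps above are a probe-then-create idempotency guard: the grep succeeds only when a Manager row already exists, and creation is skipped in that case. As one shell sketch:

    if ! ovs-vsctl show | grep -q "Manager"; then
      ovs-vsctl --timeout=5 --id=@manager -- \
        create Manager 'target="ptcp:6640:127.0.0.1"' -- \
        add Open_vSwitch . manager_options @manager
    fi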
Dec  1 14:11:37 np0005541455 python3.9[93121]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:11:37 np0005541455 python3.9[93275]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:11:38 np0005541455 python3.9[93429]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:39 np0005541455 python3.9[93507]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:11:40 np0005541455 python3.9[93659]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:40 np0005541455 python3.9[93737]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:11:41 np0005541455 python3.9[93889]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:42 np0005541455 python3.9[94041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:42 np0005541455 python3.9[94119]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:43 np0005541455 python3.9[94271]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:44 np0005541455 python3.9[94349]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:44 np0005541455 python3.9[94501]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
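The preset file plus the daemon_reload/enable/start task is systemd's vendor-preset mechanism: a 91-*.preset entry makes "systemctl preset" default the unit to enabled. The preset file's actual contents are not logged; the conventional one-line form would be:

    # Contents of the preset file are an assumption (standard systemd.preset syntax):
    printf 'enable edpm-container-shutdown.service\n' \
      > /etc/systemd/system-preset/91-edpm-container-shutdown.preset
    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service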
Dec  1 14:11:44 np0005541455 systemd[1]: Reloading.
Dec  1 14:11:45 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:11:45 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:11:46 np0005541455 python3.9[94692]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:46 np0005541455 python3.9[94770]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:47 np0005541455 python3.9[94922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:47 np0005541455 python3.9[95000]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:48 np0005541455 python3.9[95152]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:11:48 np0005541455 systemd[1]: Reloading.
Dec  1 14:11:48 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:11:48 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:11:49 np0005541455 systemd[1]: Starting Create netns directory...
Dec  1 14:11:50 np0005541455 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 14:11:50 np0005541455 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 14:11:50 np0005541455 systemd[1]: Finished Create netns directory.
Dec  1 14:11:50 np0005541455 python3.9[95346]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:11:51 np0005541455 python3.9[95498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:52 np0005541455 python3.9[95621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616311.0971043-468-78788841093113/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:11:53 np0005541455 python3.9[95774]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:11:54 np0005541455 python3.9[95926]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:11:54 np0005541455 python3.9[96049]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616313.5119774-493-39997060622303/.source.json _original_basename=.t_enlz1c follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
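The JSON written here is the Kolla config consumed inside the container by kolla_set_configs (see the COPY_ALWAYS startup trace later in this log). Its contents are not logged, but the command it must carry can be inferred from the /run_command echoed at container start; a minimal sketch, with any config_files/permissions entries omitted:

    cat > /var/lib/kolla/config_files/ovn_controller.json <<'EOF'
    {
      "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt"
    }
    EOF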
Dec  1 14:11:55 np0005541455 python3.9[96201]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:11:57 np0005541455 python3.9[96628]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  1 14:11:58 np0005541455 python3.9[96780]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:11:59 np0005541455 python3.9[96932]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 14:11:59 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:12:01 np0005541455 python3[97096]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:12:01 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:12:01 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:12:01 np0005541455 podman[97133]: 2025-12-01 19:12:01.426437173 +0000 UTC m=+0.062255053 container create ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 14:12:01 np0005541455 podman[97133]: 2025-12-01 19:12:01.398908225 +0000 UTC m=+0.034726095 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  1 14:12:01 np0005541455 python3[97096]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
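Condensed from the PODMAN-CONTAINER-DEBUG line above, the essential flags show how the config_data keys map onto podman options: healthcheck.test becomes --healthcheck-command, net: host becomes --network host, and each volumes entry becomes a --volume flag (remaining volumes as logged above):

    podman create --name ovn_controller \
      --healthcheck-command /openstack/healthcheck \
      --network host --privileged=true --user root \
      --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro \
      --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z \
      quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified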
Dec  1 14:12:02 np0005541455 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  1 14:12:02 np0005541455 python3.9[97321]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:12:03 np0005541455 python3.9[97475]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:12:03 np0005541455 python3.9[97551]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:12:04 np0005541455 python3.9[97702]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764616323.851327-581-223275079969197/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:12:04 np0005541455 python3.9[97778]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:12:05 np0005541455 systemd[1]: Reloading.
Dec  1 14:12:05 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:12:05 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:12:05 np0005541455 python3.9[97888]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:12:05 np0005541455 systemd[1]: Reloading.
Dec  1 14:12:05 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:12:05 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
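The unit copied at 14:12:04 wraps the container start; its body is not logged, but from the "Starting ovn_controller container..." description and the edpm-start-podman-container process active during startup, a plausible shape is the following (every unit line below is an assumption, not taken from the log):

    # Hypothetical sketch of edpm_ovn_controller.service:
    cat > /etc/systemd/system/edpm_ovn_controller.service <<'EOF'
    [Unit]
    Description=ovn_controller container
    After=openvswitch.service
    [Service]
    Restart=always
    ExecStart=/var/local/libexec/edpm-start-podman-container ovn_controller
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now edpm_ovn_controller.service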
Dec  1 14:12:06 np0005541455 systemd[1]: Starting ovn_controller container...
Dec  1 14:12:06 np0005541455 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  1 14:12:06 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:12:06 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a0d8d7a1ab469f4bc2d148d5d6cae69ae35ac2dc638bb7cfaa50c6fa5b7fc18/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 14:12:06 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792.
Dec  1 14:12:06 np0005541455 podman[97931]: 2025-12-01 19:12:06.299910335 +0000 UTC m=+0.130407494 container init ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: + sudo -E kolla_set_configs
Dec  1 14:12:06 np0005541455 podman[97931]: 2025-12-01 19:12:06.3301456 +0000 UTC m=+0.160642739 container start ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  1 14:12:06 np0005541455 edpm-start-podman-container[97931]: ovn_controller
Dec  1 14:12:06 np0005541455 systemd[1]: Created slice User Slice of UID 0.
Dec  1 14:12:06 np0005541455 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  1 14:12:06 np0005541455 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  1 14:12:06 np0005541455 podman[97955]: 2025-12-01 19:12:06.393533534 +0000 UTC m=+0.053911673 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 14:12:06 np0005541455 systemd[1]: Starting User Manager for UID 0...
Dec  1 14:12:06 np0005541455 systemd[1]: ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792-7a9c4058a2cacc83.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:12:06 np0005541455 systemd[1]: ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792-7a9c4058a2cacc83.service: Failed with result 'exit-code'.
Dec  1 14:12:06 np0005541455 edpm-start-podman-container[97929]: Creating additional drop-in dependency for "ovn_controller" (ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792)
Dec  1 14:12:06 np0005541455 systemd[1]: Reloading.
Dec  1 14:12:06 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:12:06 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:12:06 np0005541455 systemd[97988]: Queued start job for default target Main User Target.
Dec  1 14:12:06 np0005541455 systemd[97988]: Created slice User Application Slice.
Dec  1 14:12:06 np0005541455 systemd[97988]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  1 14:12:06 np0005541455 systemd[97988]: Started Daily Cleanup of User's Temporary Directories.
Dec  1 14:12:06 np0005541455 systemd[97988]: Reached target Paths.
Dec  1 14:12:06 np0005541455 systemd[97988]: Reached target Timers.
Dec  1 14:12:06 np0005541455 systemd[97988]: Starting D-Bus User Message Bus Socket...
Dec  1 14:12:06 np0005541455 systemd[97988]: Starting Create User's Volatile Files and Directories...
Dec  1 14:12:06 np0005541455 systemd[97988]: Listening on D-Bus User Message Bus Socket.
Dec  1 14:12:06 np0005541455 systemd[97988]: Reached target Sockets.
Dec  1 14:12:06 np0005541455 systemd[97988]: Finished Create User's Volatile Files and Directories.
Dec  1 14:12:06 np0005541455 systemd[97988]: Reached target Basic System.
Dec  1 14:12:06 np0005541455 systemd[97988]: Reached target Main User Target.
Dec  1 14:12:06 np0005541455 systemd[97988]: Startup finished in 128ms.
Dec  1 14:12:06 np0005541455 systemd[1]: Started User Manager for UID 0.
Dec  1 14:12:06 np0005541455 systemd[1]: Started ovn_controller container.
Dec  1 14:12:06 np0005541455 systemd[1]: Started Session c1 of User root.
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: INFO:__main__:Validating config file
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: INFO:__main__:Writing out command to execute
Dec  1 14:12:06 np0005541455 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: ++ cat /run_command
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: + ARGS=
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: + sudo kolla_copy_cacerts
Dec  1 14:12:06 np0005541455 systemd[1]: Started Session c2 of User root.
Dec  1 14:12:06 np0005541455 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: + [[ ! -n '' ]]
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: + . kolla_extend_start
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: + umask 0022
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  1 14:12:06 np0005541455 NetworkManager[56474]: <info>  [1764616326.8198] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  1 14:12:06 np0005541455 NetworkManager[56474]: <info>  [1764616326.8208] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 14:12:06 np0005541455 NetworkManager[56474]: <info>  [1764616326.8221] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Dec  1 14:12:06 np0005541455 NetworkManager[56474]: <info>  [1764616326.8227] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Dec  1 14:12:06 np0005541455 NetworkManager[56474]: <info>  [1764616326.8232] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  1 14:12:06 np0005541455 kernel: br-int: entered promiscuous mode
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00013|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00014|features|INFO|OVS Feature: ct_flush, state: supported
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00015|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00016|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00017|main|INFO|OVS feature set changed, force recompute.
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00022|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 14:12:06 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:06Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  1 14:12:06 np0005541455 NetworkManager[56474]: <info>  [1764616326.8422] manager: (ovn-293002-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  1 14:12:06 np0005541455 kernel: genev_sys_6081: entered promiscuous mode
Dec  1 14:12:06 np0005541455 NetworkManager[56474]: <info>  [1764616326.8632] device (genev_sys_6081): carrier: link connected
Dec  1 14:12:06 np0005541455 NetworkManager[56474]: <info>  [1764616326.8638] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Dec  1 14:12:06 np0005541455 systemd-udevd[98105]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 14:12:06 np0005541455 systemd-udevd[98109]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 14:12:07 np0005541455 python3.9[98216]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:12:07 np0005541455 ovs-vsctl[98217]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  1 14:12:08 np0005541455 python3.9[98369]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:12:08 np0005541455 ovs-vsctl[98371]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  1 14:12:09 np0005541455 python3.9[98524]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:12:09 np0005541455 ovs-vsctl[98525]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
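The cleanup trio above is intentionally tolerant: the get prints the key when present (the db_ctl_base ERR two lines up simply means ovn-cms-options was never set), and ovs-vsctl remove is a no-op when the key is already absent, so the sequence is safe to re-run (sketch):

    ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options 2>/dev/null | sed 's/\"//g'
    ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
    ovs-vsctl remove open . other_config hw-offload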
Dec  1 14:12:09 np0005541455 systemd[1]: session-20.scope: Deactivated successfully.
Dec  1 14:12:09 np0005541455 systemd[1]: session-20.scope: Consumed 50.718s CPU time.
Dec  1 14:12:09 np0005541455 systemd-logind[797]: Session 20 logged out. Waiting for processes to exit.
Dec  1 14:12:09 np0005541455 systemd-logind[797]: Removed session 20.
Dec  1 14:12:15 np0005541455 systemd-logind[797]: New session 22 of user zuul.
Dec  1 14:12:15 np0005541455 systemd[1]: Started Session 22 of User zuul.
Dec  1 14:12:16 np0005541455 systemd[1]: Stopping User Manager for UID 0...
Dec  1 14:12:16 np0005541455 systemd[97988]: Activating special unit Exit the Session...
Dec  1 14:12:16 np0005541455 systemd[97988]: Stopped target Main User Target.
Dec  1 14:12:16 np0005541455 systemd[97988]: Stopped target Basic System.
Dec  1 14:12:16 np0005541455 systemd[97988]: Stopped target Paths.
Dec  1 14:12:16 np0005541455 systemd[97988]: Stopped target Sockets.
Dec  1 14:12:16 np0005541455 systemd[97988]: Stopped target Timers.
Dec  1 14:12:16 np0005541455 systemd[97988]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  1 14:12:16 np0005541455 systemd[97988]: Closed D-Bus User Message Bus Socket.
Dec  1 14:12:16 np0005541455 systemd[97988]: Stopped Create User's Volatile Files and Directories.
Dec  1 14:12:16 np0005541455 systemd[97988]: Removed slice User Application Slice.
Dec  1 14:12:16 np0005541455 systemd[97988]: Reached target Shutdown.
Dec  1 14:12:16 np0005541455 systemd[97988]: Finished Exit the Session.
Dec  1 14:12:16 np0005541455 systemd[97988]: Reached target Exit the Session.
Dec  1 14:12:16 np0005541455 systemd[1]: user@0.service: Deactivated successfully.
Dec  1 14:12:16 np0005541455 systemd[1]: Stopped User Manager for UID 0.
Dec  1 14:12:16 np0005541455 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  1 14:12:16 np0005541455 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  1 14:12:16 np0005541455 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  1 14:12:16 np0005541455 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  1 14:12:16 np0005541455 systemd[1]: Removed slice User Slice of UID 0.
Dec  1 14:12:17 np0005541455 python3.9[98704]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:12:18 np0005541455 python3.9[98860]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:19 np0005541455 python3.9[99012]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:20 np0005541455 python3.9[99164]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:20 np0005541455 python3.9[99316]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:21 np0005541455 python3.9[99468]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
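The five ansible.builtin.file invocations above all do the same thing: ensure a directory exists, owned by zuul, mode 0755, with the container_file_t SELinux type. A minimal Python sketch of that pattern (the chcon call stands in for Ansible's built-in SELinux handling; paths and the zuul user come from the log):

    import grp
    import os
    import pwd
    import subprocess

    def make_dir(path, owner="zuul", group="zuul", mode=0o755, setype="container_file_t"):
        os.makedirs(path, exist_ok=True)       # state=directory
        os.chmod(path, mode)                   # makedirs' mode is masked by umask
        os.chown(path, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
        subprocess.run(["chcon", "-t", setype, path], check=True)  # setype=...

    for d in (
        "/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent",
        "/var/lib/neutron",
        "/var/lib/neutron/kill_scripts",
        "/var/lib/neutron/ovn-metadata-proxy",
        "/var/lib/neutron/external/pids",
    ):
        make_dir(d)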
Dec  1 14:12:22 np0005541455 python3.9[99618]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:12:23 np0005541455 python3.9[99770]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
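The seboolean task above persists one SELinux boolean; the shell-level equivalent (setsebool -P writes it into policy so it survives reboots):

    import subprocess
    subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"], check=True)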
Dec  1 14:12:24 np0005541455 python3.9[99922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:25 np0005541455 python3.9[100044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616344.28442-86-103177335001229/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:26 np0005541455 python3.9[100194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:27 np0005541455 python3.9[100315]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616345.9419925-101-15287259213883/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
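The two stat/copy pairs above are Ansible's idempotent template deploy: SHA-1 the destination first, rewrite only on mismatch. A sketch of that pattern (the rendered-source paths here are placeholders; destinations and modes are from the log):

    import hashlib
    import os
    import shutil

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src, dest, mode=0o755):
        if not os.path.exists(dest) or sha1_of(dest) != sha1_of(src):
            shutil.copyfile(src, dest)          # only rewrite on checksum change
        os.chmod(dest, mode)

    copy_if_changed("haproxy.rendered", "/var/lib/neutron/ovn_metadata_haproxy_wrapper")
    copy_if_changed("kill-script.rendered", "/var/lib/neutron/kill_scripts/haproxy-kill")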
Dec  1 14:12:28 np0005541455 python3.9[100467]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:12:29 np0005541455 python3.9[100551]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:12:31 np0005541455 python3.9[100704]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
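The dnf and systemd tasks above reduce to two commands; subprocess stands in for the ansible.legacy.dnf and ansible.builtin.systemd modules:

    import subprocess
    subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)
    subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"], check=True)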
Dec  1 14:12:32 np0005541455 python3.9[100857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:33 np0005541455 python3.9[100978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616351.891821-138-223318943659234/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:33 np0005541455 python3.9[101128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:34 np0005541455 python3.9[101249]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616353.1926374-138-264018524469921/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:35 np0005541455 python3.9[101399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:36 np0005541455 python3.9[101520]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616355.1275206-182-273942405446375/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
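The numeric prefixes on these snippets (01-rootwrap, 01-neutron-ovn-metadata-agent, 10-neutron-metadata, and 05-nova-metadata below) matter: as I understand oslo.config's --config-dir handling, every *.conf in the directory is loaded in lexical order and later files override earlier ones, so 10-*.conf wins for any shared option. The directory is mounted into the container as /etc/neutron.conf.d (see config_dir in the agent's option dump further down). A quick way to inspect the effective order:

    import glob
    # Files are applied in this sorted order; the last setting of an option wins.
    for path in sorted(glob.glob("/etc/neutron.conf.d/*.conf")):
        print(path)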
Dec  1 14:12:36 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:36Z|00025|memory|INFO|16000 kB peak resident set size after 30.1 seconds
Dec  1 14:12:36 np0005541455 ovn_controller[97948]: 2025-12-01T19:12:36Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  1 14:12:36 np0005541455 podman[101644]: 2025-12-01 19:12:36.913979877 +0000 UTC m=+0.136222576 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  1 14:12:36 np0005541455 python3.9[101680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:37 np0005541455 python3.9[101814]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616356.4146278-182-141308157831507/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:38 np0005541455 python3.9[101964]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:12:39 np0005541455 python3.9[102118]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:40 np0005541455 python3.9[102270]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:40 np0005541455 python3.9[102348]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:41 np0005541455 python3.9[102500]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:41 np0005541455 python3.9[102578]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:42 np0005541455 python3.9[102730]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:12:43 np0005541455 python3.9[102882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:44 np0005541455 python3.9[102960]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:12:44 np0005541455 python3.9[103112]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:45 np0005541455 python3.9[103190]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:12:46 np0005541455 python3.9[103342]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:12:46 np0005541455 systemd[1]: Reloading.
Dec  1 14:12:46 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:12:46 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
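The unit-plus-preset handling above amounts to: install the service file, drop a preset so it is enabled by default, then daemon-reload and start it. A sketch with an assumed preset body (only the file names appear in the log, not their contents):

    import pathlib
    import subprocess

    # Assumed preset content; systemd preset syntax is "enable <unit>".
    pathlib.Path("/etc/systemd/system-preset/91-edpm-container-shutdown.preset").write_text(
        "enable edpm-container-shutdown.service\n"
    )
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "--now", "edpm-container-shutdown.service"], check=True)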
Dec  1 14:12:48 np0005541455 python3.9[103532]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:48 np0005541455 python3.9[103610]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:12:49 np0005541455 python3.9[103762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:50 np0005541455 python3.9[103840]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:12:52 np0005541455 python3.9[103992]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:12:52 np0005541455 systemd[1]: Reloading.
Dec  1 14:12:52 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:12:52 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:12:52 np0005541455 systemd[1]: Starting Create netns directory...
Dec  1 14:12:52 np0005541455 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 14:12:52 np0005541455 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 14:12:52 np0005541455 systemd[1]: Finished Create netns directory.
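A guess at what the netns-placeholder oneshot does, based only on the unit names above (Create netns directory, run-netns-placeholder.mount): ensure /run/netns exists and is a shared mount, so namespaces created on the host stay visible inside containers that bind-mount /run/netns with :shared, as ovn_metadata_agent does below.

    import os
    import subprocess

    os.makedirs("/run/netns", exist_ok=True)
    # Self-bind then mark shared so mount events propagate into containers.
    subprocess.run(["mount", "--bind", "/run/netns", "/run/netns"], check=True)
    subprocess.run(["mount", "--make-shared", "/run/netns"], check=True)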
Dec  1 14:12:53 np0005541455 python3.9[104185]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:54 np0005541455 python3.9[104337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:55 np0005541455 python3.9[104460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616374.101532-333-15644799274003/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:56 np0005541455 python3.9[104612]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:12:57 np0005541455 python3.9[104764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:12:57 np0005541455 python3.9[104887]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616376.7647133-358-20662946893270/.source.json _original_basename=._2k0hinc follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
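The exact contents of ovn_metadata_agent.json are not logged, but the kolla_set_configs output further down (command written to /run_command, the rootwrap copy, the permission fix-ups) implies a kolla config file shaped roughly like this; the values below are illustrative:

    import json

    config = {
        "command": "neutron-ovn-metadata-agent",
        "config_files": [
            {
                "source": "/etc/neutron.conf.d/01-rootwrap.conf",
                "dest": "/etc/neutron/rootwrap.conf",
                "owner": "neutron",
                "perm": "0600",
            },
        ],
        "permissions": [
            {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
        ],
    }
    with open("/var/lib/kolla/config_files/ovn_metadata_agent.json", "w") as f:
        json.dump(config, f, indent=2)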
Dec  1 14:12:58 np0005541455 python3.9[105039]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:01 np0005541455 python3.9[105466]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  1 14:13:02 np0005541455 python3.9[105618]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
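The EDPM_CONFIG_HASH that shows up in the container environment below is derived from the generated config volume, so a config change forces a container recreate. A minimal stand-in for the idea (the real hashing lives in the edpm_ansible container_config_hash module and may differ in detail):

    import hashlib
    import pathlib

    def config_hash(vol="/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent"):
        h = hashlib.sha256()
        for p in sorted(pathlib.Path(vol).rglob("*")):  # stable order
            if p.is_file():
                h.update(p.read_bytes())
        return h.hexdigest()

    print(config_hash())  # recreate the container when this value changes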
Dec  1 14:13:03 np0005541455 python3.9[105770]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 14:13:04 np0005541455 python3[105950]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:13:05 np0005541455 podman[105988]: 2025-12-01 19:13:04.959731278 +0000 UTC m=+0.023011296 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 14:13:05 np0005541455 podman[105988]: 2025-12-01 19:13:05.090542954 +0000 UTC m=+0.153822962 container create 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:13:05 np0005541455 python3[105950]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
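A condensed, programmatic form of the `podman create` traced above, trimmed to the structurally interesting flags (see the PODMAN-CONTAINER-DEBUG line for the full set, including all the volume mounts):

    import subprocess

    subprocess.run([
        "podman", "create",
        "--name", "ovn_metadata_agent",
        "--network", "host", "--pid", "host", "--privileged=True",
        "--user", "root",
        "--healthcheck-command", "/openstack/healthcheck",
        "--env", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS",
        "--volume", "/var/lib/kolla/config_files/ovn_metadata_agent.json:"
                    "/var/lib/kolla/config_files/config.json:ro",
        "--log-driver", "journald",
        "quay.io/podified-antelope-centos9/"
        "openstack-neutron-metadata-agent-ovn:current-podified",
    ], check=True)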
Dec  1 14:13:05 np0005541455 python3.9[106178]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:13:06 np0005541455 python3.9[106332]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:07 np0005541455 podman[106380]: 2025-12-01 19:13:07.201862695 +0000 UTC m=+0.143686334 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 14:13:07 np0005541455 python3.9[106421]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:13:08 np0005541455 python3.9[106585]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764616387.43143-446-221535148590664/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:08 np0005541455 python3.9[106661]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:13:08 np0005541455 systemd[1]: Reloading.
Dec  1 14:13:08 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:13:08 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:13:09 np0005541455 python3.9[106771]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:13:09 np0005541455 systemd[1]: Reloading.
Dec  1 14:13:09 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:13:09 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
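The ansible-copy at 14:13:08 installs /etc/systemd/system/edpm_ovn_metadata_agent.service, whose body is not printed in this log. A sketch of a plausible unit, modelled on the edpm-start-podman-container and edpm-container-shutdown helpers that appear below; the [Service] lines are assumptions, not the literal file:

    import pathlib
    import subprocess

    UNIT = """\
    [Unit]
    Description=ovn_metadata_agent container
    After=network.target

    [Service]
    Restart=always
    ExecStart=/var/local/libexec/edpm-start-podman-container ovn_metadata_agent

    [Install]
    WantedBy=multi-user.target
    """

    pathlib.Path("/etc/systemd/system/edpm_ovn_metadata_agent.service").write_text(UNIT)
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "--now", "edpm_ovn_metadata_agent.service"], check=True)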
Dec  1 14:13:10 np0005541455 systemd[1]: Starting ovn_metadata_agent container...
Dec  1 14:13:10 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:13:10 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18f38a97bc89df806460e9ef0e90b35309c183dd86938f9dfdc2b8ee6e36bed/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  1 14:13:10 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18f38a97bc89df806460e9ef0e90b35309c183dd86938f9dfdc2b8ee6e36bed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 14:13:10 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b.
Dec  1 14:13:10 np0005541455 podman[106812]: 2025-12-01 19:13:10.223297879 +0000 UTC m=+0.154403439 container init 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: + sudo -E kolla_set_configs
Dec  1 14:13:10 np0005541455 podman[106812]: 2025-12-01 19:13:10.264492724 +0000 UTC m=+0.195598284 container start 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Dec  1 14:13:10 np0005541455 edpm-start-podman-container[106812]: ovn_metadata_agent
Dec  1 14:13:10 np0005541455 edpm-start-podman-container[106811]: Creating additional drop-in dependency for "ovn_metadata_agent" (43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b)
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Validating config file
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Copying service configuration files
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Writing out command to execute
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: ++ cat /run_command
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: + CMD=neutron-ovn-metadata-agent
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: + ARGS=
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: + sudo kolla_copy_cacerts
Dec  1 14:13:10 np0005541455 systemd[1]: Reloading.
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: + [[ ! -n '' ]]
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: + . kolla_extend_start
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: Running command: 'neutron-ovn-metadata-agent'
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: + umask 0022
Dec  1 14:13:10 np0005541455 ovn_metadata_agent[106828]: + exec neutron-ovn-metadata-agent
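Under COPY_ALWAYS, the kolla_set_configs run traced above re-copies every config_files entry from the bind-mounted sources on each start, fixes permissions, writes the service command to /run_command, and then the entrypoint execs it. A compressed sketch of that loop (ownership handling and error paths omitted):

    import json
    import os
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for entry in cfg.get("config_files", []):
        if os.path.exists(entry["dest"]):
            os.unlink(entry["dest"])                     # "Deleting ..."
        shutil.copyfile(entry["source"], entry["dest"])  # "Copying ... to ..."
        os.chmod(entry["dest"], int(entry.get("perm", "0644"), 8))

    with open("/run_command", "w") as f:                 # "Writing out command"
        f.write(cfg["command"])

    os.execvp(cfg["command"], [cfg["command"]])          # the final `exec`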
Dec  1 14:13:10 np0005541455 podman[106835]: 2025-12-01 19:13:10.413703819 +0000 UTC m=+0.127697371 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 14:13:10 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:13:10 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:13:10 np0005541455 systemd[1]: Started ovn_metadata_agent container.
Dec  1 14:13:11 np0005541455 systemd[1]: session-22.scope: Deactivated successfully.
Dec  1 14:13:11 np0005541455 systemd[1]: session-22.scope: Consumed 39.365s CPU time.
Dec  1 14:13:11 np0005541455 systemd-logind[797]: Session 22 logged out. Waiting for processes to exit.
Dec  1 14:13:11 np0005541455 systemd-logind[797]: Removed session 22.
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.115 106833 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.115 106833 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.116 106833 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.116 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.116 106833 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.116 106833 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.116 106833 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.117 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.117 106833 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.117 106833 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.117 106833 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.117 106833 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.117 106833 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.117 106833 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.118 106833 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.118 106833 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.118 106833 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.118 106833 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.118 106833 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.118 106833 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.118 106833 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.118 106833 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.118 106833 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.119 106833 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.119 106833 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.119 106833 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.119 106833 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.119 106833 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.119 106833 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.119 106833 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.119 106833 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.119 106833 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.120 106833 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.120 106833 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.120 106833 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.120 106833 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.120 106833 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.120 106833 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.120 106833 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.120 106833 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.121 106833 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.122 106833 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.123 106833 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.123 106833 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.123 106833 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.123 106833 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.123 106833 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.123 106833 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.123 106833 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.123 106833 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.123 106833 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.124 106833 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.124 106833 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.124 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.124 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.124 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.124 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.124 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.124 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.124 106833 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.125 106833 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.125 106833 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.125 106833 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.125 106833 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.125 106833 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.125 106833 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.125 106833 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.125 106833 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.125 106833 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.126 106833 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.127 106833 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.127 106833 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.127 106833 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.127 106833 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.127 106833 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.127 106833 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.127 106833 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.127 106833 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.128 106833 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.128 106833 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.128 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.128 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.128 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.128 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.128 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.128 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.129 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.129 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.129 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.129 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.129 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.129 106833 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.129 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.129 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.129 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.130 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.131 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.131 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.131 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.131 106833 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.131 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.131 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.131 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.131 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.132 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.132 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.132 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.132 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.132 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.132 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.132 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.132 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.132 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.133 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.133 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.133 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.133 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.133 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.133 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.133 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.133 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.133 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.134 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.134 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.134 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.134 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.134 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.134 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.134 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.134 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.134 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.135 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.135 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.135 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.135 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.135 106833 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.135 106833 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.135 106833 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.135 106833 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.135 106833 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.136 106833 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.136 106833 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.136 106833 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.136 106833 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.136 106833 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.136 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.136 106833 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.136 106833 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.136 106833 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.137 106833 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.137 106833 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.137 106833 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.137 106833 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.137 106833 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.137 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.137 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.137 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.138 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.138 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.138 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.138 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.138 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.138 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.138 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.138 106833 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.138 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.139 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.139 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.139 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.139 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.139 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.139 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.139 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.140 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.140 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.140 106833 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.140 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.140 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.140 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.140 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.140 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.141 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.141 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.141 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.141 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.141 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.141 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.141 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.141 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.141 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.142 106833 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.143 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.143 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.143 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.143 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.143 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.143 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.143 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.143 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.143 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.144 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.145 106833 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.146 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.146 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.146 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.146 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.146 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.146 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.146 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.146 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.146 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.147 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.147 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.147 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.147 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.147 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.147 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.147 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.147 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.147 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.148 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.149 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.149 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.149 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.149 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.149 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.149 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.149 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.149 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.149 106833 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.150 106833 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
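[editor's note] The block ending in the row of asterisks above is oslo.config's standard startup dump: the agent calls log_opt_values(), which walks every registered option and logs one "group.option = value" line at DEBUG, masking secret options such as transport_url and metadata_proxy_shared_secret as ****. A minimal, self-contained sketch of the same mechanism follows; the OVS.ovsdb_timeout option is registered here purely for illustration and is not neutron's own registration code.

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    # Illustrative registration; the real agent registers many more options.
    CONF.register_opts([cfg.IntOpt('ovsdb_timeout', default=10)], group='OVS')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([], project='example')  # parse an (empty) command line
    # Emits one "OVS.ovsdb_timeout = 10"-style DEBUG line per option,
    # masking options flagged secret=True as ****.
    CONF.log_opt_values(LOG, logging.DEBUG)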
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.159 106833 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.160 106833 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.160 106833 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.160 106833 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.160 106833 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.173 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 91869463-7ce7-4561-8225-db4a77bb5f12 (UUID: 91869463-7ce7-4561-8225-db4a77bb5f12) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
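[editor's note] The three "Created schema index" lines plus the connect pair show ovsdbapp bringing up its OVS IDL: name indexes for Bridge, Port and Interface speed up lookups, and the TCP session to the local ovsdb-server at 127.0.0.1:6640 matches ovs.ovsdb_connection from the dump above; the agent then reads its chassis name and integration bridge from that database. A sketch of the same connection through ovsdbapp's documented API, assuming a local ovsdb-server; error handling is elided:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Build an IDL against the local ovsdb-server and wrap it in the
    # Open_vSwitch command API (timeout mirrors OVS.ovsdb_timeout = 10).
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    # A trivial query against the same server, analogous to the agent
    # checking its integration bridge:
    print(ovs.br_exists('br-int').execute(check_error=True))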
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.203 106833 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.203 106833 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.203 106833 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.203 106833 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.206 106833 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.216 106833 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.222 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '91869463-7ce7-4561-8225-db4a77bb5f12'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], external_ids={}, name=91869463-7ce7-4561-8225-db4a77bb5f12, nb_cfg_timestamp=1764616334846, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
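[editor's note] The "Matched CREATE: ChassisPrivateCreateEvent" line is ovsdbapp's row-event machinery firing: the agent watches for its own Chassis_Private row in the southbound database and reacts once it appears. A simplified sketch of such an event class (neutron's real ChassisPrivateCreateEvent carries more logic):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        """Fire once this agent's Chassis_Private row appears in the SB DB."""

        def __init__(self, chassis_name):
            events = (self.ROW_CREATE,)
            conditions = (('name', '=', chassis_name),)
            super().__init__(events, 'Chassis_Private', conditions)

        def run(self, event, row, old):
            # Invoked from the IDL notification loop with the matched row.
            print('chassis registered, nb_cfg=%s' % row.nb_cfg)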
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.223 106833 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f1b36766160>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.224 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.224 106833 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.224 106833 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.224 106833 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.228 106833 DEBUG oslo_service.service [-] Started child 106940 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
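[editor's note] "Starting 1 workers" and "Started child 106940" come from oslo.service's ProcessLauncher: the parent forks one metadata-proxy worker (metadata_workers = 1 in the dump below) and keeps supervising it, restarting children that die. A minimal sketch of that fork-and-supervise pattern, independent of neutron:

    from oslo_config import cfg
    from oslo_service import service

    class MetadataWorker(service.Service):
        def start(self):
            super().start()
            # each forked child would set up its WSGI server here

    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(MetadataWorker(), workers=1)  # "Starting 1 workers"
    launcher.wait()  # block in the parent, supervising the children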
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.232 106833 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpgz9nutjx/privsep.sock']#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.235 106940 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-964926'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.277 106940 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.279 106940 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.279 106940 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.286 106940 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.297 106940 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.307 106940 INFO eventlet.wsgi.server [-] (106940) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
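[editor's note] The odd-looking address "http:/var/lib/neutron/metadata_proxy" is not a mangled URL: eventlet prints the bind address verbatim, and here the server is bound to the Unix socket named by metadata_proxy_socket rather than a TCP port. A sketch of serving WSGI over a Unix socket with eventlet; the path is a stand-in:

    import os
    import socket

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'metadata response\n']

    path = '/tmp/metadata_proxy'  # stand-in for /var/lib/neutron/metadata_proxy
    if os.path.exists(path):
        os.unlink(path)
    sock = eventlet.listen(path, family=socket.AF_UNIX)
    wsgi.server(sock, app)  # logs a line like "wsgi starting up on http:<path>"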
Dec  1 14:13:12 np0005541455 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.983 106833 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.983 106833 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgz9nutjx/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.837 106945 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.842 106945 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.845 106945 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.845 106945 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106945#033[00m
Dec  1 14:13:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:12.986 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[d676edda-af9c-4ca2-b5d6-564b7311f205]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
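[editor's note] The sequence from "Running privsep helper" through the reply at 19:13:12.986 is the oslo.privsep handshake: the unprivileged agent launches privsep-helper via sudo and rootwrap, the helper forks a daemon that runs as uid/gid 0/0 but keeps only the configured capability set (CAP_SYS_ADMIN here, matching privsep_namespace.capabilities = [21] in the dump below), and the two sides then exchange calls over the temporary privsep.sock. A hedged sketch of declaring such a context; the names are illustrative rather than neutron's:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Mirrors a [privsep_namespace]-style config section; CAP_SYS_ADMIN
    # is Linux capability number 21.
    namespace_ctx = priv_context.PrivContext(
        'example',
        cfg_section='privsep_namespace',
        pypath=__name__ + '.namespace_ctx',
        capabilities=[caps.CAP_SYS_ADMIN])

    @namespace_ctx.entrypoint
    def create_namespace(name):
        # The body runs inside the forked privsep daemon, not in the
        # agent process that invoked it.
        pass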
Dec  1 14:13:13 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:13.531 106945 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 14:13:13 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:13.531 106945 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 14:13:13 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:13.531 106945 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.085 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[9283da9f-1e44-4264-affb-82391f11ce6a]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.089 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, column=external_ids, values=({'neutron:ovn-metadata-id': '6b6fc3b2-bcf7-5b2e-ba9f-8afdc050f8a3'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.111 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
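[editor's note] The two single-command transactions write the agent's identity back into its Chassis_Private row: DbAddCommand merges neutron:ovn-metadata-id into external_ids, and DbSetCommand records the integration bridge. A sketch of the equivalent calls through ovsdbapp; the plain-TCP endpoint below is a hypothetical stand-in for the SSL endpoint the agent actually uses:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    CHASSIS = '91869463-7ce7-4561-8225-db4a77bb5f12'

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=180))

    # Equivalent of the DbAddCommand / DbSetCommand pair logged above.
    with sb.transaction(check_error=True) as txn:
        txn.add(sb.db_add(
            'Chassis_Private', CHASSIS, 'external_ids',
            {'neutron:ovn-metadata-id': '6b6fc3b2-bcf7-5b2e-ba9f-8afdc050f8a3'}))
        txn.add(sb.db_set(
            'Chassis_Private', CHASSIS,
            ('external_ids', {'neutron:ovn-bridge': 'br-int'})))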
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.121 106833 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.122 106833 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.122 106833 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.122 106833 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.122 106833 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.122 106833 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.122 106833 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.123 106833 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.123 106833 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.123 106833 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.123 106833 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.123 106833 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.123 106833 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.124 106833 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.124 106833 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.124 106833 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.124 106833 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.124 106833 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.124 106833 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.125 106833 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.125 106833 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.125 106833 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.125 106833 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.125 106833 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.125 106833 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.126 106833 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.126 106833 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.126 106833 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.126 106833 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.126 106833 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.126 106833 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.127 106833 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.127 106833 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.127 106833 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.127 106833 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.127 106833 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.127 106833 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.128 106833 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.128 106833 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.128 106833 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.128 106833 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.128 106833 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.128 106833 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.129 106833 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.129 106833 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.129 106833 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.129 106833 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.129 106833 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.129 106833 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.129 106833 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.130 106833 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.130 106833 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.130 106833 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.130 106833 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.130 106833 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.130 106833 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.130 106833 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.130 106833 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.131 106833 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.131 106833 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.131 106833 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.131 106833 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.131 106833 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.131 106833 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.131 106833 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.132 106833 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.132 106833 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.132 106833 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.132 106833 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.132 106833 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.132 106833 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.132 106833 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.133 106833 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.133 106833 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.133 106833 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.133 106833 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.133 106833 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.133 106833 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.133 106833 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.134 106833 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.134 106833 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.134 106833 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.134 106833 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.134 106833 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.134 106833 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.134 106833 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.135 106833 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.135 106833 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.135 106833 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.135 106833 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.135 106833 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.135 106833 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.135 106833 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.136 106833 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.136 106833 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.136 106833 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.136 106833 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.136 106833 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.136 106833 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.136 106833 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.137 106833 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.137 106833 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.137 106833 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.137 106833 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.137 106833 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.137 106833 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.137 106833 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.138 106833 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.138 106833 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.138 106833 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.138 106833 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.138 106833 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.138 106833 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.139 106833 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.139 106833 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.139 106833 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.139 106833 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.139 106833 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.139 106833 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.139 106833 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.140 106833 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.140 106833 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.140 106833 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.140 106833 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.140 106833 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.140 106833 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.141 106833 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.141 106833 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.141 106833 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.141 106833 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.141 106833 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.141 106833 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.141 106833 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.142 106833 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.142 106833 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.142 106833 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.142 106833 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.142 106833 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.142 106833 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.143 106833 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.143 106833 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.143 106833 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.143 106833 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.143 106833 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.143 106833 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.143 106833 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.143 106833 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.144 106833 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.144 106833 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.144 106833 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.144 106833 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.144 106833 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.144 106833 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.144 106833 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.145 106833 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.145 106833 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.145 106833 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.145 106833 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.145 106833 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.145 106833 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.145 106833 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.145 106833 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.146 106833 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.146 106833 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.146 106833 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.146 106833 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.146 106833 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.146 106833 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.146 106833 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.147 106833 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.147 106833 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.147 106833 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.147 106833 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.147 106833 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.147 106833 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.147 106833 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.148 106833 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.148 106833 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.148 106833 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.148 106833 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.148 106833 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.148 106833 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.148 106833 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.149 106833 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.149 106833 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.149 106833 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.149 106833 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.149 106833 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.149 106833 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.149 106833 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.150 106833 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.150 106833 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.150 106833 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.150 106833 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.150 106833 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.150 106833 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.150 106833 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.151 106833 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.151 106833 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.151 106833 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.151 106833 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.151 106833 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.151 106833 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.151 106833 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.151 106833 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.152 106833 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.152 106833 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.152 106833 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.152 106833 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.152 106833 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.152 106833 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.152 106833 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.153 106833 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.153 106833 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.153 106833 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.153 106833 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.153 106833 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.153 106833 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.153 106833 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.154 106833 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.154 106833 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.154 106833 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.154 106833 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.154 106833 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.154 106833 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.154 106833 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.154 106833 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.155 106833 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.155 106833 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.155 106833 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.155 106833 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.155 106833 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.155 106833 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.155 106833 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.156 106833 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.156 106833 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.156 106833 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.156 106833 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.156 106833 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.156 106833 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.157 106833 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.157 106833 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.157 106833 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.157 106833 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.157 106833 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.157 106833 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.157 106833 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.158 106833 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.158 106833 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.158 106833 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.158 106833 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.158 106833 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.158 106833 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.158 106833 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.159 106833 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.159 106833 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.159 106833 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.159 106833 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.159 106833 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.159 106833 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.159 106833 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.159 106833 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.160 106833 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.160 106833 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.160 106833 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.160 106833 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.160 106833 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.160 106833 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.160 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.161 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.161 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.161 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.161 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.161 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.161 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.161 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.162 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.162 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.162 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.162 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.162 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.162 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.162 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.163 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.163 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.163 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.163 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.163 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.163 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.163 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.164 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.164 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.164 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.164 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.164 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.164 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.164 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.165 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.165 106833 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.165 106833 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.165 106833 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.165 106833 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.165 106833 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:13:14 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:13:14.165 106833 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
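[Editor's note] The block above is oslo.config's standard option dump: when the agent starts with debug logging enabled, log_opt_values (the cfg.py:2609 frame cited on every line) walks each registered group, prints one section.option = value line per option, masks registered secrets such as oslo_messaging_notifications.transport_url as ****, and closes with the asterisk rule from cfg.py:2613. A minimal sketch of producing such a dump with the real oslo.config API; the "demo" group and its two options are illustrative, not taken from neutron:

import logging
from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.CONF
CONF.register_group(cfg.OptGroup(name="demo"))  # illustrative group
CONF.register_opts(
    [cfg.IntOpt("thread_pool_size", default=8),
     cfg.StrOpt("user", default=None)],
    group="demo",
)
CONF([])  # parse an empty argv so the config object is usable
# Emits "demo.thread_pool_size = 8" etc., then a closing row of asterisks.
CONF.log_opt_values(LOG, logging.DEBUG)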
Dec  1 14:13:16 np0005541455 systemd-logind[797]: New session 23 of user zuul.
Dec  1 14:13:16 np0005541455 systemd[1]: Started Session 23 of User zuul.
Dec  1 14:13:17 np0005541455 python3.9[107103]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:13:19 np0005541455 python3.9[107259]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
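[Editor's note] Unescaping journald's rendering (\{\{ is {{), the task ran: podman ps -a --filter name=^nova_virtlogd$ --format {{.Names}}. The anchored regex filter prints the container name only if a container named exactly nova_virtlogd exists. A hedged Python equivalent, assuming podman is on PATH:

import subprocess

def container_exists(name: str) -> bool:
    # Anchored regex so e.g. "nova_virtlogd2" would not match.
    out = subprocess.run(
        ["podman", "ps", "-a", "--filter", f"name=^{name}$",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return bool(out)

print(container_exists("nova_virtlogd"))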
Dec  1 14:13:20 np0005541455 python3.9[107424]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:13:20 np0005541455 systemd[1]: Reloading.
Dec  1 14:13:20 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:13:20 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:13:21 np0005541455 python3.9[107609]: ansible-ansible.builtin.service_facts Invoked
Dec  1 14:13:21 np0005541455 network[107626]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 14:13:21 np0005541455 network[107627]: 'network-scripts' will be removed from distribution in near future.
Dec  1 14:13:21 np0005541455 network[107628]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 14:13:27 np0005541455 python3.9[107889]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:13:28 np0005541455 python3.9[108042]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:13:29 np0005541455 python3.9[108195]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:13:29 np0005541455 python3.9[108348]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:13:30 np0005541455 python3.9[108501]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:13:31 np0005541455 python3.9[108654]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:13:32 np0005541455 python3.9[108807]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
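[Editor's note] The seven systemd_service tasks above (enabled=False, state=stopped) stop and disable every legacy tripleo_nova_* unit before its file is removed. Per unit this is roughly systemctl disable --now; a sketch, with the unit list copied from the log:

import subprocess

TRIPLEO_UNITS = [
    "tripleo_nova_libvirt.target",
    "tripleo_nova_virtlogd_wrapper.service",
    "tripleo_nova_virtnodedevd.service",
    "tripleo_nova_virtproxyd.service",
    "tripleo_nova_virtqemud.service",
    "tripleo_nova_virtsecretd.service",
    "tripleo_nova_virtstoraged.service",
]
for unit in TRIPLEO_UNITS:
    # enabled=False + state=stopped maps to "disable --now"; check=False
    # because an already-stopped or absent unit is fine for this cleanup.
    subprocess.run(["systemctl", "disable", "--now", unit], check=False)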
Dec  1 14:13:33 np0005541455 python3.9[108960]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:34 np0005541455 python3.9[109112]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:35 np0005541455 python3.9[109264]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:35 np0005541455 python3.9[109416]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:36 np0005541455 python3.9[109568]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:37 np0005541455 python3.9[109720]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:37 np0005541455 podman[109844]: 2025-12-01 19:13:37.929835228 +0000 UTC m=+0.135843722 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:13:38 np0005541455 python3.9[109883]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:38 np0005541455 python3.9[110052]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:39 np0005541455 python3.9[110204]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:40 np0005541455 python3.9[110356]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:41 np0005541455 podman[110480]: 2025-12-01 19:13:41.042611971 +0000 UTC m=+0.108939737 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 14:13:41 np0005541455 python3.9[110525]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:41 np0005541455 python3.9[110678]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:42 np0005541455 python3.9[110830]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:13:43 np0005541455 python3.9[110982]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
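[Editor's note] The ansible.builtin.file tasks with state=absent then delete the same unit files from both /usr/lib/systemd/system and /etc/systemd/system; like the module, the sketch below treats an already-missing file as success:

from pathlib import Path

UNITS = [
    "tripleo_nova_libvirt.target",
    "tripleo_nova_virtlogd_wrapper.service",
    "tripleo_nova_virtnodedevd.service",
    "tripleo_nova_virtproxyd.service",
    "tripleo_nova_virtqemud.service",
    "tripleo_nova_virtsecretd.service",
    "tripleo_nova_virtstoraged.service",
]
for root in ("/usr/lib/systemd/system", "/etc/systemd/system"):
    for unit in UNITS:
        # missing_ok=True mirrors state=absent's idempotence.
        Path(root, unit).unlink(missing_ok=True)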
Dec  1 14:13:44 np0005541455 python3.9[111134]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
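[Editor's note] journald folds multi-line command payloads into one record, encoding each newline as #012. Decoded, the guarded certmonger cleanup in the _raw_params above reads:

if systemctl is-active certmonger.service; then
  systemctl disable --now certmonger.service
  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
fi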
Dec  1 14:13:45 np0005541455 python3.9[111286]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 14:13:46 np0005541455 python3.9[111440]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:13:46 np0005541455 systemd[1]: Reloading.
Dec  1 14:13:46 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:13:46 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:13:47 np0005541455 python3.9[111626]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:13:48 np0005541455 python3.9[111779]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:13:48 np0005541455 python3.9[111932]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:13:49 np0005541455 python3.9[112085]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:13:50 np0005541455 python3.9[112238]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:13:51 np0005541455 python3.9[112391]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:13:52 np0005541455 python3.9[112544]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
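[Editor's note] Following the daemon reload, the systemctl reset-failed calls above clear any lingering failed state for each removed tripleo_nova_* unit, so the deleted units drop out of systemd's view entirely rather than remaining listed as failed.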
Dec  1 14:13:53 np0005541455 python3.9[112697]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  1 14:13:54 np0005541455 python3.9[112850]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 14:13:55 np0005541455 python3.9[113008]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
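[Editor's note] The getent/group/user trio pins the libvirt group and user to the fixed id 42473 with a /sbin/nologin shell, so file ownership stays consistent with the containerized services. Roughly, with shadow-utils (a non-idempotent sketch; the ansible modules additionally handle the already-exists case):

import subprocess

subprocess.run(["groupadd", "-g", "42473", "libvirt"], check=False)
subprocess.run(
    ["useradd", "-u", "42473", "-g", "libvirt", "-s", "/sbin/nologin",
     "-c", "libvirt user", "libvirt"],
    check=False,
)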
Dec  1 14:13:56 np0005541455 python3.9[113168]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:13:57 np0005541455 python3.9[113252]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
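[Editor's note] This dnf task installs the host virtualization stack: the libvirt daemons and clients, qemu-kvm and qemu-img, libguestfs, libseccomp, swtpm and its tools, edk2-ovmf firmware, ceph-common, and cyrus-sasl-scram; roughly dnf -y install of that package list. The trailing spaces inside the first four package names are reproduced verbatim from the module invocation.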
Dec  1 14:14:08 np0005541455 podman[113329]: 2025-12-01 19:14:08.372242378 +0000 UTC m=+0.144793810 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  1 14:14:11 np0005541455 podman[113443]: 2025-12-01 19:14:11.316595661 +0000 UTC m=+0.080101844 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 14:14:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:14:12.153 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:14:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:14:12.154 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:14:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:14:12.154 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:14:25 np0005541455 kernel: SELinux:  Converting 2757 SID table entries...
Dec  1 14:14:25 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 14:14:25 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 14:14:25 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 14:14:25 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 14:14:25 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 14:14:25 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 14:14:25 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 14:14:34 np0005541455 kernel: SELinux:  Converting 2757 SID table entries...
Dec  1 14:14:34 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 14:14:34 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 14:14:34 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 14:14:34 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 14:14:34 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 14:14:34 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 14:14:34 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 14:14:39 np0005541455 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  1 14:14:39 np0005541455 podman[113501]: 2025-12-01 19:14:39.385824662 +0000 UTC m=+0.143130427 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 14:14:42 np0005541455 podman[113527]: 2025-12-01 19:14:42.31812536 +0000 UTC m=+0.086890219 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 14:15:10 np0005541455 podman[127846]: 2025-12-01 19:15:10.32946702 +0000 UTC m=+0.104242316 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 14:15:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:15:12.154 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:15:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:15:12.155 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:15:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:15:12.155 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:15:13 np0005541455 podman[129873]: 2025-12-01 19:15:13.30258679 +0000 UTC m=+0.073643182 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:15:27 np0005541455 kernel: SELinux:  Converting 2758 SID table entries...
Dec  1 14:15:27 np0005541455 kernel: SELinux:  policy capability network_peer_controls=1
Dec  1 14:15:27 np0005541455 kernel: SELinux:  policy capability open_perms=1
Dec  1 14:15:27 np0005541455 kernel: SELinux:  policy capability extended_socket_class=1
Dec  1 14:15:27 np0005541455 kernel: SELinux:  policy capability always_check_network=0
Dec  1 14:15:27 np0005541455 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  1 14:15:27 np0005541455 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  1 14:15:27 np0005541455 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  1 14:15:28 np0005541455 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Dec  1 14:15:28 np0005541455 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  1 14:15:28 np0005541455 dbus-broker-launch[763]: Noticed file-system modification, trigger reload.
Dec  1 14:15:35 np0005541455 systemd[1]: Stopping OpenSSH server daemon...
Dec  1 14:15:35 np0005541455 systemd[1]: sshd.service: Deactivated successfully.
Dec  1 14:15:35 np0005541455 systemd[1]: Stopped OpenSSH server daemon.
Dec  1 14:15:35 np0005541455 systemd[1]: sshd.service: Consumed 3.408s CPU time, read 564.0K from disk, written 44.0K to disk.
Dec  1 14:15:35 np0005541455 systemd[1]: Stopped target sshd-keygen.target.
Dec  1 14:15:35 np0005541455 systemd[1]: Stopping sshd-keygen.target...
Dec  1 14:15:35 np0005541455 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 14:15:35 np0005541455 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 14:15:35 np0005541455 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  1 14:15:35 np0005541455 systemd[1]: Reached target sshd-keygen.target.
Dec  1 14:15:35 np0005541455 systemd[1]: Starting OpenSSH server daemon...
Dec  1 14:15:35 np0005541455 systemd[1]: Started OpenSSH server daemon.
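The stop/start pair above is an sshd restart; the log does not record which task or scriptlet triggered it. If driven from Ansible it would resemble this hypothetical handler:

```yaml
# Hypothetical; the log only shows systemd stopping and starting sshd.
- name: Restart sshd
  ansible.builtin.systemd:
    name: sshd
    state: restarted
```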
Dec  1 14:15:38 np0005541455 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 14:15:38 np0005541455 systemd[1]: Starting man-db-cache-update.service...
Dec  1 14:15:38 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:38 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:15:38 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:38 np0005541455 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 14:15:41 np0005541455 podman[134961]: 2025-12-01 19:15:41.318347159 +0000 UTC m=+0.086567621 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:15:42 np0005541455 python3.9[136098]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 14:15:42 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:42 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:15:42 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:43 np0005541455 podman[137145]: 2025-12-01 19:15:43.422714394 +0000 UTC m=+0.058766327 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 14:15:43 np0005541455 python3.9[137261]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 14:15:43 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:43 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:15:43 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:44 np0005541455 python3.9[138433]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 14:15:44 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:44 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:44 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:15:45 np0005541455 python3.9[139589]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 14:15:45 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:45 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:45 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
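The four systemd invocations above stop, disable and mask the monolithic libvirtd service and the legacy TCP/TLS sockets so that only the modular daemons remain. A consolidated sketch (the loop form is an assumption; the log shows one module call per unit):

```yaml
# Sketch: mask the legacy units named in the log above.
- name: Stop and mask legacy libvirt units
  ansible.builtin.systemd:
    name: "{{ item }}"
    state: stopped
    enabled: false
    masked: true
  loop:
    - libvirtd
    - libvirtd-tcp.socket
    - libvirtd-tls.socket
    - virtproxyd-tcp.socket
```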
Dec  1 14:15:46 np0005541455 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 14:15:46 np0005541455 systemd[1]: Finished man-db-cache-update.service.
Dec  1 14:15:46 np0005541455 systemd[1]: man-db-cache-update.service: Consumed 10.928s CPU time.
Dec  1 14:15:46 np0005541455 systemd[1]: run-r7bd89a1f479b438286a4a987a310ead5.service: Deactivated successfully.
Dec  1 14:15:47 np0005541455 python3.9[140757]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:47 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:47 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:47 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:15:48 np0005541455 python3.9[140947]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:48 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:48 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:48 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:15:49 np0005541455 python3.9[141137]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:49 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:49 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:49 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:15:50 np0005541455 python3.9[141327]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:51 np0005541455 python3.9[141482]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:51 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:51 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:15:51 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:52 np0005541455 python3.9[141672]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  1 14:15:52 np0005541455 systemd[1]: Reloading.
Dec  1 14:15:52 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:15:52 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:15:52 np0005541455 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  1 14:15:52 np0005541455 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
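The modular daemons are then enabled without being started (no state argument is passed), and virtproxyd-tls.socket is enabled and started, which produces the two "Listening on" lines. A sketch under the same loop assumption:

```yaml
# Sketch: enable the modular libvirt daemons and start the TLS socket.
- name: Enable the modular libvirt daemons
  ansible.builtin.systemd:
    name: "{{ item }}"
    enabled: true
    masked: false
  loop:
    - virtlogd.service
    - virtnodedevd.service
    - virtproxyd.service
    - virtqemud.service
    - virtsecretd.service

- name: Enable and start the libvirt proxy TLS socket
  ansible.builtin.systemd:
    name: virtproxyd-tls.socket
    enabled: true
    state: started
```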
Dec  1 14:15:53 np0005541455 python3.9[141867]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:54 np0005541455 python3.9[142022]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:55 np0005541455 python3.9[142177]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:56 np0005541455 python3.9[142332]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:56 np0005541455 python3.9[142487]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:57 np0005541455 python3.9[142642]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:58 np0005541455 python3.9[142797]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:15:59 np0005541455 irqbalance[790]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  1 14:15:59 np0005541455 irqbalance[790]: IRQ 26 affinity is now unmanaged
Dec  1 14:15:59 np0005541455 python3.9[142952]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:16:00 np0005541455 python3.9[143107]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:16:01 np0005541455 python3.9[143262]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:16:02 np0005541455 python3.9[143417]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:16:02 np0005541455 python3.9[143572]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:16:03 np0005541455 python3.9[143727]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  1 14:16:04 np0005541455 python3.9[143882]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
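Each daemon's main, read-only and admin activation sockets are enabled the same way, fourteen units in all. Sketch:

```yaml
# Sketch: enable every libvirt activation socket named in the log.
- name: Enable libvirt activation sockets
  ansible.builtin.systemd:
    name: "{{ item }}"
    enabled: true
    masked: false
  loop:
    - virtlogd.socket
    - virtlogd-admin.socket
    - virtnodedevd.socket
    - virtnodedevd-ro.socket
    - virtnodedevd-admin.socket
    - virtproxyd.socket
    - virtproxyd-ro.socket
    - virtproxyd-admin.socket
    - virtqemud.socket
    - virtqemud-ro.socket
    - virtqemud-admin.socket
    - virtsecretd.socket
    - virtsecretd-ro.socket
    - virtsecretd-admin.socket
```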
Dec  1 14:16:05 np0005541455 python3.9[144037]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:16:06 np0005541455 python3.9[144189]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:16:07 np0005541455 python3.9[144341]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:16:07 np0005541455 python3.9[144493]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:16:08 np0005541455 python3.9[144645]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:16:09 np0005541455 python3.9[144797]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
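The file-module calls above create the tmpfiles, firewall and PKI directories with the container_file_t SELinux type so containerized services can traverse them. A consolidated sketch (paths, owners, groups and modes from the log; the loop is an assumption):

```yaml
# Sketch: directories created above, labelled container_file_t.
- name: Create libvirt configuration and PKI directories
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: directory
    owner: root
    group: "{{ item.group | default('root') }}"
    mode: "{{ item.mode | default(omit) }}"
    setype: container_file_t
  loop:
    - { path: /etc/tmpfiles.d/ }
    - { path: /var/lib/edpm-config/firewall }
    - { path: /etc/pki/libvirt, mode: '0755' }
    - { path: /etc/pki/libvirt/private, mode: '0755' }
    - { path: /etc/pki/CA, mode: '0755' }
    - { path: /etc/pki/qemu, group: qemu }
```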
Dec  1 14:16:10 np0005541455 python3.9[144949]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:11 np0005541455 python3.9[145074]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764616569.3042939-554-117228392471984/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:11 np0005541455 podman[145198]: 2025-12-01 19:16:11.684435438 +0000 UTC m=+0.093232715 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 14:16:11 np0005541455 python3.9[145242]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:16:12.155 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:16:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:16:12.156 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:16:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:16:12.156 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:16:12 np0005541455 python3.9[145377]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764616571.2751055-554-21113100694248/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:13 np0005541455 python3.9[145529]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:13 np0005541455 podman[145626]: 2025-12-01 19:16:13.739959253 +0000 UTC m=+0.079474175 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:16:13 np0005541455 python3.9[145669]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764616572.6065323-554-125875543198216/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:14 np0005541455 python3.9[145825]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:15 np0005541455 python3.9[145950]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764616574.042654-554-273905711678382/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:16 np0005541455 python3.9[146102]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:16 np0005541455 python3.9[146227]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764616575.4813232-554-139304544887859/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:17 np0005541455 python3.9[146379]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:18 np0005541455 python3.9[146504]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764616576.9034898-554-110528225670467/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:18 np0005541455 python3.9[146656]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:19 np0005541455 python3.9[146779]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764616578.405974-554-237997155194768/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:20 np0005541455 python3.9[146931]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:21 np0005541455 python3.9[147056]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764616580.0488567-554-236701450916003/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
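Each stat/copy pair above is one ansible.builtin.copy (or template) task at work: the module stats the destination first and ships the staged source only when the checksum differs. A representative sketch for virtlogd.conf, with ownership and mode from the log (the src path is illustrative; the log shows a temporary staging file):

```yaml
# Sketch: one of the eight libvirt config deployments logged above.
- name: Deploy virtlogd.conf
  ansible.builtin.copy:
    src: virtlogd.conf
    dest: /etc/libvirt/virtlogd.conf
    owner: libvirt
    group: libvirt
    mode: '0640'
```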
Dec  1 14:16:21 np0005541455 python3.9[147208]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
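The saslpasswd2 call seeds the SASL database used for authenticated live migration: it registers user migration in realm openstack in /etc/libvirt/passwd.db, reading the password from stdin (the logged value 12345678 is plainly a test credential). As a task this is roughly:

```yaml
# Sketch of the logged saslpasswd2 invocation; password value from the log.
- name: Register the migration user in the libvirt SASL database
  ansible.builtin.command:
    cmd: saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
    stdin: "12345678"
```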
Dec  1 14:16:22 np0005541455 python3.9[147361]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:23 np0005541455 python3.9[147513]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:24 np0005541455 python3.9[147665]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:25 np0005541455 python3.9[147817]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:25 np0005541455 python3.9[147969]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:26 np0005541455 python3.9[148121]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:27 np0005541455 python3.9[148273]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:28 np0005541455 python3.9[148425]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:28 np0005541455 python3.9[148577]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:29 np0005541455 python3.9[148729]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:30 np0005541455 python3.9[148881]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:31 np0005541455 python3.9[149033]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:31 np0005541455 python3.9[149185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:32 np0005541455 python3.9[149337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
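A drop-in directory is created for each of the fourteen socket units so that each can receive an override.conf. Sketch:

```yaml
# Sketch: one systemd drop-in directory per libvirt socket unit.
- name: Create socket drop-in directories
  ansible.builtin.file:
    path: "/etc/systemd/system/{{ item }}.d"
    state: directory
    owner: root
    group: root
    mode: '0755'
  loop:
    - virtlogd.socket
    - virtlogd-admin.socket
    - virtnodedevd.socket
    - virtnodedevd-ro.socket
    - virtnodedevd-admin.socket
    - virtproxyd.socket
    - virtproxyd-ro.socket
    - virtproxyd-admin.socket
    - virtqemud.socket
    - virtqemud-ro.socket
    - virtqemud-admin.socket
    - virtsecretd.socket
    - virtsecretd-ro.socket
    - virtsecretd-admin.socket
```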
Dec  1 14:16:33 np0005541455 python3.9[149489]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:34 np0005541455 python3.9[149612]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616592.8767107-775-183214161433298/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:35 np0005541455 python3.9[149764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:35 np0005541455 python3.9[149887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616594.5526187-775-19744206989196/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:36 np0005541455 python3.9[150039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:37 np0005541455 python3.9[150162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616596.038286-775-220372975002964/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:38 np0005541455 python3.9[150314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:38 np0005541455 python3.9[150437]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616597.5535362-775-205591512636975/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:39 np0005541455 python3.9[150589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:40 np0005541455 python3.9[150712]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616598.901137-775-194036386665836/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:40 np0005541455 python3.9[150864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:41 np0005541455 python3.9[150987]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616600.3548431-775-266690685562378/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:42 np0005541455 podman[151111]: 2025-12-01 19:16:42.242850415 +0000 UTC m=+0.178567356 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
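[Editor's note] The podman record above is a periodic health probe of the ovn_controller container; per the logged config_data, podman bind-mounts /var/lib/openstack/healthchecks/ovn_controller into the container and runs /openstack/healthcheck as the test. The same probe can be run by hand; a minimal sketch using only names recorded in the log:

    # run the configured healthcheck once; exit status 0 means healthy
    podman healthcheck run ovn_controller && echo healthy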
Dec  1 14:16:42 np0005541455 python3.9[151156]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:42 np0005541455 python3.9[151288]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616601.6206574-775-243562696798645/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:43 np0005541455 python3.9[151440]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:44 np0005541455 podman[151535]: 2025-12-01 19:16:44.128476592 +0000 UTC m=+0.055198347 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 14:16:44 np0005541455 python3.9[151582]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616603.157746-775-280144105720666/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:45 np0005541455 python3.9[151735]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:45 np0005541455 python3.9[151858]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616604.57521-775-172848948130791/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:46 np0005541455 python3.9[152010]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:47 np0005541455 python3.9[152133]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616605.9150863-775-83456307378297/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:47 np0005541455 python3.9[152285]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:48 np0005541455 python3.9[152408]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616607.4043508-775-71935782526875/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:49 np0005541455 python3.9[152560]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:49 np0005541455 python3.9[152683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616608.652021-775-22322833050899/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:50 np0005541455 python3.9[152835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:51 np0005541455 python3.9[152958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616609.892475-775-247320006072877/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:52 np0005541455 python3.9[153112]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:16:52 np0005541455 python3.9[153235]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616611.6783087-775-239072225845698/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
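[Editor's note] The stat/copy pairs above (one per libvirt socket unit) render the same Jinja2 template, libvirt-socket.unit.j2, into an override.conf drop-in for each socket -- the identical checksum 0bad41f409b4ee7e780a2a59dc18f5c84ed99826 on every copy confirms the content is the same. The rendered contents are not logged (content=NOT_LOGGING_PARAMETER), so the drop-in below is a hypothetical illustration of the mechanism only; the verification command is real:

    # HYPOTHETICAL contents -- the real override.conf is not recorded in this log
    mkdir -p /etc/systemd/system/virtqemud.socket.d
    cat > /etc/systemd/system/virtqemud.socket.d/override.conf <<'EOF'
    [Socket]
    SocketMode=0660
    EOF
    systemctl daemon-reload
    systemctl cat virtqemud.socket   # prints the packaged unit followed by each drop-in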
Dec  1 14:16:53 np0005541455 python3.9[153385]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:16:54 np0005541455 python3.9[153540]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
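[Editor's note] The seboolean task above persistently enables the os_enable_vtpm SELinux boolean (used to permit vTPM emulation for guests); the policy rebuild it triggers is what produces the dbus-broker avc: op=load_policy line that follows. The manual equivalent:

    setsebool -P os_enable_vtpm on   # -P persists the change into the policy store
    getsebool os_enable_vtpm         # expect: os_enable_vtpm --> on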
Dec  1 14:16:57 np0005541455 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  1 14:16:57 np0005541455 python3.9[153696]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:58 np0005541455 python3.9[153848]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:58 np0005541455 python3.9[154000]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:16:59 np0005541455 python3.9[154152]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:00 np0005541455 python3.9[154304]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:01 np0005541455 python3.9[154456]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:01 np0005541455 python3.9[154610]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:02 np0005541455 python3.9[154762]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:03 np0005541455 python3.9[154916]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:03 np0005541455 python3.9[155068]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
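[Editor's note] The ten copy tasks above fan a single TLS key pair and CA out of /var/lib/openstack/certs/libvirt/default/ into the paths libvirt and QEMU read: /etc/pki/libvirt/servercert.pem and clientcert.pem with their keys under /etc/pki/libvirt/private/, /etc/pki/CA/cacert.pem, and the /etc/pki/qemu/ server/client/ca set (group qemu, mode 0640). A quick consistency check, assuming an RSA key:

    # both digests must match, proving the key pairs with the certificate
    openssl x509 -noout -modulus -in /etc/pki/libvirt/servercert.pem | openssl md5
    openssl rsa  -noout -modulus -in /etc/pki/libvirt/private/serverkey.pem | openssl md5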
Dec  1 14:17:04 np0005541455 python3.9[155220]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:17:04 np0005541455 systemd[1]: Reloading.
Dec  1 14:17:04 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:17:04 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:17:04 np0005541455 systemd[1]: Starting libvirt logging daemon socket...
Dec  1 14:17:04 np0005541455 systemd[1]: Listening on libvirt logging daemon socket.
Dec  1 14:17:04 np0005541455 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  1 14:17:04 np0005541455 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  1 14:17:04 np0005541455 systemd[1]: Starting libvirt logging daemon...
Dec  1 14:17:04 np0005541455 systemd[1]: Started libvirt logging daemon.
Dec  1 14:17:05 np0005541455 python3.9[155413]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:17:05 np0005541455 systemd[1]: Reloading.
Dec  1 14:17:05 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:17:05 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:17:06 np0005541455 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  1 14:17:06 np0005541455 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  1 14:17:06 np0005541455 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  1 14:17:06 np0005541455 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  1 14:17:06 np0005541455 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  1 14:17:06 np0005541455 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  1 14:17:06 np0005541455 systemd[1]: Starting libvirt nodedev daemon...
Dec  1 14:17:06 np0005541455 systemd[1]: Started libvirt nodedev daemon.
Dec  1 14:17:06 np0005541455 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  1 14:17:06 np0005541455 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  1 14:17:06 np0005541455 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  1 14:17:06 np0005541455 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  1 14:17:06 np0005541455 python3.9[155630]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:17:06 np0005541455 systemd[1]: Reloading.
Dec  1 14:17:06 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:17:06 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:17:07 np0005541455 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  1 14:17:07 np0005541455 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  1 14:17:07 np0005541455 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  1 14:17:07 np0005541455 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  1 14:17:07 np0005541455 systemd[1]: Starting libvirt proxy daemon...
Dec  1 14:17:07 np0005541455 systemd[1]: Started libvirt proxy daemon.
Dec  1 14:17:07 np0005541455 setroubleshoot[155475]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 24a3d1f9-f5c5-4094-a1a5-5ead50478997
Dec  1 14:17:07 np0005541455 setroubleshoot[155475]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

    *****  Plugin dac_override (91.4 confidence) suggests   **********************

    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do
    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.

    *****  Plugin catchall (9.59 confidence) suggests   **************************

    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
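[Editor's note] dac_read_search (CAP_DAC_READ_SEARCH) lets a process bypass file read and directory search permission checks; the denial above was raised as virtlogd restarted under the new socket drop-ins. To examine the event referenced in the message:

    sealert -l 24a3d1f9-f5c5-4094-a1a5-5ead50478997   # report id taken from the log
    ausearch -m avc -c virtlogd -ts recent            # raw AVC records for the daemon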
Dec  1 14:17:07 np0005541455 python3.9[155851]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:17:07 np0005541455 systemd[1]: Reloading.
Dec  1 14:17:07 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:17:07 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:17:08 np0005541455 systemd[1]: Listening on libvirt locking daemon socket.
Dec  1 14:17:08 np0005541455 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  1 14:17:08 np0005541455 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  1 14:17:08 np0005541455 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  1 14:17:08 np0005541455 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  1 14:17:08 np0005541455 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  1 14:17:08 np0005541455 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  1 14:17:08 np0005541455 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  1 14:17:08 np0005541455 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  1 14:17:08 np0005541455 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  1 14:17:08 np0005541455 systemd[1]: Starting libvirt QEMU daemon...
Dec  1 14:17:08 np0005541455 systemd[1]: Started libvirt QEMU daemon.
Dec  1 14:17:08 np0005541455 python3.9[156066]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:17:08 np0005541455 systemd[1]: Reloading.
Dec  1 14:17:08 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:17:08 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:17:09 np0005541455 systemd[1]: Starting libvirt secret daemon socket...
Dec  1 14:17:09 np0005541455 systemd[1]: Listening on libvirt secret daemon socket.
Dec  1 14:17:09 np0005541455 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  1 14:17:09 np0005541455 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  1 14:17:09 np0005541455 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  1 14:17:09 np0005541455 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  1 14:17:09 np0005541455 systemd[1]: Starting libvirt secret daemon...
Dec  1 14:17:09 np0005541455 systemd[1]: Started libvirt secret daemon.
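[Editor's note] At this point all five modular libvirt daemons (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd) have been restarted, each activated through its systemd socket units. To confirm the set:

    systemctl list-sockets | grep -E 'virt(logd|nodedevd|proxyd|qemud|secretd)'
    systemctl is-active virtlogd virtnodedevd virtproxyd virtqemud virtsecretd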
Dec  1 14:17:09 np0005541455 python3.9[156278]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:10 np0005541455 python3.9[156430]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 14:17:12 np0005541455 python3.9[156582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:17:12.157 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 14:17:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:17:12.160 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 14:17:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:17:12.161 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 14:17:12 np0005541455 podman[156677]: 2025-12-01 19:17:12.545875407 +0000 UTC m=+0.098046062 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 14:17:12 np0005541455 python3.9[156722]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616631.394991-1120-8604548184782/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:13 np0005541455 python3.9[156884]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:14 np0005541455 podman[157036]: 2025-12-01 19:17:14.267754684 +0000 UTC m=+0.083924711 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 14:17:14 np0005541455 python3.9[157037]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:14 np0005541455 python3.9[157131]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:15 np0005541455 python3.9[157283]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:16 np0005541455 python3.9[157361]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ttlofcky recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:16 np0005541455 python3.9[157513]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:17 np0005541455 python3.9[157591]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:17 np0005541455 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  1 14:17:17 np0005541455 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  1 14:17:18 np0005541455 python3.9[157743]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:17:18 np0005541455 python3[157896]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 14:17:19 np0005541455 python3.9[158048]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:20 np0005541455 python3.9[158126]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:20 np0005541455 python3.9[158278]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:21 np0005541455 python3.9[158356]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:21 np0005541455 python3.9[158508]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:22 np0005541455 python3.9[158586]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:23 np0005541455 python3.9[158738]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:23 np0005541455 python3.9[158816]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:24 np0005541455 python3.9[158968]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:24 np0005541455 python3.9[159093]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616643.7379832-1245-196966482487208/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:25 np0005541455 python3.9[159245]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:26 np0005541455 python3.9[159397]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:17:27 np0005541455 python3.9[159552]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:28 np0005541455 python3.9[159704]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:17:28 np0005541455 python3.9[159857]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:17:29 np0005541455 python3.9[160011]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:17:30 np0005541455 python3.9[160166]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
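[Editor's note] The sequence from 14:17:26 to 14:17:30 is a check-then-apply cycle for the EDPM nftables ruleset: the concatenated chain/flush/rule/jump files are first validated with nft -c -f -, the include block is persisted into /etc/sysconfig/nftables.conf (itself validated via nft -c -f %s), chains are loaded unconditionally, and the flush/rules/jump-update pipeline is gated by a stat of the edpm-rules.nft.changed sentinel (created at 14:17:25, removed at 14:17:30). Replayed by hand:

    nft -f /etc/nftables/edpm-chains.nft            # create tables and chains (idempotent)
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -   # applied as one atomic transaction
    nft list ruleset                                # inspect the applied rules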
Dec  1 14:17:30 np0005541455 python3.9[160318]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:31 np0005541455 python3.9[160441]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616650.4752588-1317-155357900206400/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:32 np0005541455 python3.9[160593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:32 np0005541455 python3.9[160716]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616651.79057-1332-127977653777829/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:33 np0005541455 python3.9[160868]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:17:34 np0005541455 python3.9[160991]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616653.0668325-1347-146115792197755/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:17:35 np0005541455 python3.9[161143]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:17:35 np0005541455 systemd[1]: Reloading.
Dec  1 14:17:35 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:17:35 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:17:35 np0005541455 systemd[1]: Reached target edpm_libvirt.target.
Dec  1 14:17:36 np0005541455 python3.9[161333]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  1 14:17:36 np0005541455 systemd[1]: Reloading.
Dec  1 14:17:36 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:17:36 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:17:36 np0005541455 systemd[1]: Reloading.
Dec  1 14:17:36 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:17:37 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:17:37 np0005541455 systemd[1]: session-23.scope: Deactivated successfully.
Dec  1 14:17:37 np0005541455 systemd[1]: session-23.scope: Consumed 3min 31.210s CPU time.
Dec  1 14:17:37 np0005541455 systemd-logind[797]: Session 23 logged out. Waiting for processes to exit.
Dec  1 14:17:37 np0005541455 systemd-logind[797]: Removed session 23.
Dec  1 14:17:43 np0005541455 podman[161430]: 2025-12-01 19:17:43.316980791 +0000 UTC m=+0.090405575 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 14:17:43 np0005541455 systemd-logind[797]: New session 24 of user zuul.
Dec  1 14:17:43 np0005541455 systemd[1]: Started Session 24 of User zuul.
Dec  1 14:17:44 np0005541455 podman[161584]: 2025-12-01 19:17:44.422538157 +0000 UTC m=+0.091132147 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 14:17:44 np0005541455 python3.9[161623]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:17:46 np0005541455 python3.9[161783]: ansible-ansible.builtin.service_facts Invoked
Dec  1 14:17:46 np0005541455 network[161800]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec  1 14:17:46 np0005541455 network[161801]: 'network-scripts' will be removed from the distribution in the near future.
Dec  1 14:17:46 np0005541455 network[161802]: It is advised to switch to 'NetworkManager' for network management.
Dec  1 14:17:52 np0005541455 python3.9[162074]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  1 14:17:53 np0005541455 python3.9[162158]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:17:59 np0005541455 python3.9[162311]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:17:59 np0005541455 python3.9[162463]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:18:00 np0005541455 python3.9[162616]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:18:01 np0005541455 python3.9[162768]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:18:02 np0005541455 python3.9[162921]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:02 np0005541455 python3.9[163044]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616681.7845082-95-221532177275050/.source.iscsi _original_basename=.091qew8a follow=False checksum=1a56c5e0eeaf09e8db82af6541ac10a9f2839107 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:03 np0005541455 python3.9[163196]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
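[Editor's note] The tasks above reset the iSCSI initiator identity: iscsi-iname generates a fresh IQN, the result is written to /etc/iscsi/initiatorname.iscsi, and the .initiator_reset marker records that the reset happened. A hypothetical manual equivalent (the actual IQN written here is not logged):

    echo "InitiatorName=$(/usr/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
    chmod 0644 /etc/iscsi/initiatorname.iscsi
    touch /etc/iscsi/.initiator_reset && chmod 0600 /etc/iscsi/.initiator_reset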
Dec  1 14:18:04 np0005541455 python3.9[163348]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
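[Editor's note] The lineinfile task above pins the CHAP digest preference; after it runs, /etc/iscsi/iscsid.conf should contain exactly:

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5

i.e. SHA3-256 is preferred, with MD5 used only as a last resort when negotiating with the target.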
Dec  1 14:18:04 np0005541455 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 14:18:05 np0005541455 python3.9[163501]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:18:05 np0005541455 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  1 14:18:06 np0005541455 python3.9[163657]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:18:06 np0005541455 systemd[1]: Reloading.
Dec  1 14:18:06 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:18:06 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec  1 14:18:07 np0005541455 systemd[1]: One-time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  1 14:18:07 np0005541455 systemd[1]: Starting Open-iSCSI...
Dec  1 14:18:07 np0005541455 kernel: Loading iSCSI transport class v2.0-870.
Dec  1 14:18:07 np0005541455 systemd[1]: Started Open-iSCSI.
Dec  1 14:18:07 np0005541455 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Dec  1 14:18:07 np0005541455 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
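The two systemd_service tasks (and the unit activity they trigger above) amount to enabling socket activation and then the daemon itself; note that the one-time iscsi.service setup is skipped because /etc/iscsi/initiatorname.iscsi now exists, failing its ConditionPathExists=! check. Shell equivalent:

    # Socket-activated iscsid: enable and start both units
    systemctl enable --now iscsid.socket
    systemctl enable --now iscsid.service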
Dec  1 14:18:08 np0005541455 python3.9[163859]: ansible-ansible.builtin.service_facts Invoked
Dec  1 14:18:08 np0005541455 network[163876]: You are using the 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 14:18:08 np0005541455 network[163877]: 'network-scripts' will be removed from the distribution in the near future.
Dec  1 14:18:08 np0005541455 network[163878]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 14:18:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:18:12.158 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:18:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:18:12.159 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:18:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:18:12.159 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:18:13 np0005541455 podman[163985]: 2025-12-01 19:18:13.456950812 +0000 UTC m=+0.096159747 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 14:18:14 np0005541455 python3.9[164178]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 14:18:15 np0005541455 podman[164302]: 2025-12-01 19:18:15.217225519 +0000 UTC m=+0.082965486 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 14:18:15 np0005541455 python3.9[164347]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  1 14:18:16 np0005541455 python3.9[164505]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:16 np0005541455 python3.9[164628]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616695.6421094-172-77457367416795/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:17 np0005541455 python3.9[164782]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
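The three tasks above make dm-multipath available now and at boot: modprobe loads it immediately, a modules-load.d drop-in covers systemd-managed boots, and a line in /etc/modules covers the legacy path; the play then restarts systemd-modules-load.service (next entries) so the drop-in takes effect. The drop-in's content is not logged, but given the module-load.conf.j2 template it is assumed to be just the module name:

    # /etc/modules-load.d/dm-multipath.conf (assumed content)
    dm-multipath

    # Verify the module is loaded (the kernel name uses underscores)
    lsmod | grep dm_multipath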
Dec  1 14:18:18 np0005541455 python3.9[164934]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:18:18 np0005541455 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  1 14:18:18 np0005541455 systemd[1]: Stopped Load Kernel Modules.
Dec  1 14:18:18 np0005541455 systemd[1]: Stopping Load Kernel Modules...
Dec  1 14:18:18 np0005541455 systemd[1]: Starting Load Kernel Modules...
Dec  1 14:18:18 np0005541455 systemd[1]: Finished Load Kernel Modules.
Dec  1 14:18:19 np0005541455 python3.9[165090]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:18:20 np0005541455 python3.9[165242]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:18:21 np0005541455 python3.9[165394]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:18:21 np0005541455 python3.9[165546]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:22 np0005541455 python3.9[165669]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616701.2645445-230-95017423038092/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:23 np0005541455 python3.9[165821]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:18:24 np0005541455 python3.9[165974]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:25 np0005541455 python3.9[166126]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:25 np0005541455 python3.9[166278]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:26 np0005541455 python3.9[166430]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:27 np0005541455 python3.9[166582]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:27 np0005541455 python3.9[166734]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:28 np0005541455 python3.9[166886]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
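Taken together, the grep/lineinfile/replace sequence since 14:18:23 normalizes /etc/multipath.conf: it guarantees a blacklist section, strips the catch-all devnode ".*" rule from it, and inserts four settings after the defaults header (each insertafter=^defaults with firstmatch, so the last-inserted line sits closest to the header). A minimal sketch of the resulting layout, with stock content the tasks leave alone omitted:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }
    blacklist {
    }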
Dec  1 14:18:29 np0005541455 python3.9[167038]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:18:30 np0005541455 python3.9[167192]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:31 np0005541455 python3.9[167344]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:18:31 np0005541455 python3.9[167496]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:32 np0005541455 python3.9[167574]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:18:33 np0005541455 python3.9[167726]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:33 np0005541455 python3.9[167804]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:18:34 np0005541455 python3.9[167956]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:35 np0005541455 python3.9[168108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:35 np0005541455 python3.9[168186]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:36 np0005541455 python3.9[168338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:37 np0005541455 python3.9[168416]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:38 np0005541455 python3.9[168568]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:18:38 np0005541455 systemd[1]: Reloading.
Dec  1 14:18:38 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:18:38 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
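edpm-container-shutdown is shipped as a unit file plus a preset, then enabled and started with daemon_reload=True (hence the Reloading above). The preset's content is not logged; by systemd preset convention, a file that turns the unit on would read as below (assumed content):

    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset (assumed)
    enable edpm-container-shutdown.service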
Dec  1 14:18:39 np0005541455 python3.9[168758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:39 np0005541455 python3.9[168836]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:40 np0005541455 python3.9[168988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:41 np0005541455 python3.9[169066]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:42 np0005541455 python3.9[169218]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:18:42 np0005541455 systemd[1]: Reloading.
Dec  1 14:18:42 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:18:42 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec  1 14:18:42 np0005541455 systemd[1]: Starting Create netns directory...
Dec  1 14:18:42 np0005541455 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  1 14:18:42 np0005541455 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  1 14:18:42 np0005541455 systemd[1]: Finished Create netns directory.
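netns-placeholder is a oneshot that prepares /run/netns and exits immediately (both the .mount and .service deactivate right after 'Finished'); that directory is what lets containers such as ovn_metadata_agent bind /run/netns:shared. To confirm:

    # The directory must exist for the :shared bind mount used by the metadata agent
    ls -ld /run/netns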
Dec  1 14:18:43 np0005541455 python3.9[169411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:18:43 np0005541455 podman[169412]: 2025-12-01 19:18:43.806088767 +0000 UTC m=+0.129600058 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  1 14:18:44 np0005541455 python3.9[169589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:45 np0005541455 python3.9[169712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616723.8674183-437-191815909596939/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:18:45 np0005541455 podman[169836]: 2025-12-01 19:18:45.852845247 +0000 UTC m=+0.071298561 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 14:18:46 np0005541455 python3.9[169883]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:18:46 np0005541455 python3.9[170035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:18:47 np0005541455 python3.9[170158]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616726.293792-462-67231813740273/.source.json _original_basename=.lqxz1n3c follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:48 np0005541455 python3.9[170310]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:51 np0005541455 python3.9[170737]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  1 14:18:51 np0005541455 python3.9[170891]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:18:52 np0005541455 python3.9[171043]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  1 14:18:54 np0005541455 python3[171222]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:18:54 np0005541455 podman[171258]: 2025-12-01 19:18:54.73251697 +0000 UTC m=+0.049729421 container create eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  1 14:18:54 np0005541455 podman[171258]: 2025-12-01 19:18:54.709900945 +0000 UTC m=+0.027113426 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  1 14:18:54 np0005541455 python3[171222]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
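The PODMAN-CONTAINER-DEBUG line is the literal podman create call that edpm_container_manage generated from the config_data label: host network, privileged, journald logging, and the kolla config.json plus healthcheck mounts. At this point the container exists but is not yet running; for reference:

    # Should report 'created' until systemd starts the unit below
    podman inspect --format '{{.State.Status}}' multipathd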
Dec  1 14:18:55 np0005541455 python3.9[171448]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:18:56 np0005541455 python3.9[171602]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:56 np0005541455 python3.9[171679]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:18:57 np0005541455 python3.9[171831]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764616736.996218-550-73290111591440/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:18:58 np0005541455 python3.9[171907]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:18:58 np0005541455 systemd[1]: Reloading.
Dec  1 14:18:58 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:18:58 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec  1 14:18:59 np0005541455 python3.9[172018]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:18:59 np0005541455 systemd[1]: Reloading.
Dec  1 14:18:59 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:18:59 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec  1 14:18:59 np0005541455 systemd[1]: Starting multipathd container...
Dec  1 14:18:59 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:18:59 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1faacd27ae8bf2b3aec79db683023f086e40c86a49a82e51027663f6dd2307f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 14:18:59 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1faacd27ae8bf2b3aec79db683023f086e40c86a49a82e51027663f6dd2307f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 14:18:59 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b.
Dec  1 14:18:59 np0005541455 podman[172057]: 2025-12-01 19:18:59.908694531 +0000 UTC m=+0.137781743 container init eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Dec  1 14:18:59 np0005541455 multipathd[172072]: + sudo -E kolla_set_configs
Dec  1 14:18:59 np0005541455 podman[172057]: 2025-12-01 19:18:59.956648285 +0000 UTC m=+0.185735407 container start eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  1 14:18:59 np0005541455 podman[172057]: multipathd
Dec  1 14:18:59 np0005541455 systemd[1]: Started multipathd container.
Dec  1 14:18:59 np0005541455 multipathd[172072]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:18:59 np0005541455 multipathd[172072]: INFO:__main__:Validating config file
Dec  1 14:18:59 np0005541455 multipathd[172072]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:18:59 np0005541455 multipathd[172072]: INFO:__main__:Writing out command to execute
Dec  1 14:18:59 np0005541455 multipathd[172072]: ++ cat /run_command
Dec  1 14:18:59 np0005541455 multipathd[172072]: + CMD='/usr/sbin/multipathd -d'
Dec  1 14:18:59 np0005541455 multipathd[172072]: + ARGS=
Dec  1 14:18:59 np0005541455 multipathd[172072]: + sudo kolla_copy_cacerts
Dec  1 14:19:00 np0005541455 multipathd[172072]: + [[ ! -n '' ]]
Dec  1 14:19:00 np0005541455 multipathd[172072]: + . kolla_extend_start
Dec  1 14:19:00 np0005541455 multipathd[172072]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  1 14:19:00 np0005541455 multipathd[172072]: Running command: '/usr/sbin/multipathd -d'
Dec  1 14:19:00 np0005541455 multipathd[172072]: + umask 0022
Dec  1 14:19:00 np0005541455 multipathd[172072]: + exec /usr/sbin/multipathd -d
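The '+'-prefixed entries are bash xtrace from the kolla start script inside the container: apply config.json, install CA bundles, read the command file, then exec the daemon as PID 1. Reduced to its core, the traced sequence is:

    # Simplified from the xtrace above; helper names are kolla's own
    sudo -E kolla_set_configs            # copy files per config.json (COPY_ALWAYS)
    CMD="$(cat /run_command)"            # here: /usr/sbin/multipathd -d
    sudo kolla_copy_cacerts
    . kolla_extend_start
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD}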
Dec  1 14:19:00 np0005541455 podman[172079]: 2025-12-01 19:19:00.038789764 +0000 UTC m=+0.071721285 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 14:19:00 np0005541455 multipathd[172072]: 3227.817611 | --------start up--------
Dec  1 14:19:00 np0005541455 multipathd[172072]: 3227.817631 | read /etc/multipath.conf
Dec  1 14:19:00 np0005541455 multipathd[172072]: 3227.822077 | path checkers start up
Dec  1 14:19:00 np0005541455 systemd[1]: eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b-7e47fc20ef3af1ae.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:19:00 np0005541455 systemd[1]: eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b-7e47fc20ef3af1ae.service: Failed with result 'exit-code'.
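The failing unit here is the transient service podman spawns per healthcheck run (container ID plus a hash); its exit status 1 matches health_status=starting with health_failing_streak=1 at 14:19:00, i.e. the check ran before multipathd finished initializing, not a broken unit. The check can be replayed by hand:

    # Re-run the configured /openstack/healthcheck; 0 = healthy, non-zero = failing
    podman healthcheck run multipathd
    echo $?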
Dec  1 14:19:00 np0005541455 python3.9[172259]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:19:01 np0005541455 python3.9[172413]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:02 np0005541455 python3.9[172578]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:19:02 np0005541455 systemd[1]: Stopping multipathd container...
Dec  1 14:19:02 np0005541455 multipathd[172072]: 3230.257877 | exit (signal)
Dec  1 14:19:02 np0005541455 multipathd[172072]: 3230.257952 | --------shut down-------
Dec  1 14:19:02 np0005541455 systemd[1]: libpod-eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b.scope: Deactivated successfully.
Dec  1 14:19:02 np0005541455 podman[172582]: 2025-12-01 19:19:02.527051 +0000 UTC m=+0.086049247 container died eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec  1 14:19:02 np0005541455 systemd[1]: eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b-7e47fc20ef3af1ae.timer: Deactivated successfully.
Dec  1 14:19:02 np0005541455 systemd[1]: Stopped /usr/bin/podman healthcheck run eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b.
Dec  1 14:19:02 np0005541455 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b-userdata-shm.mount: Deactivated successfully.
Dec  1 14:19:02 np0005541455 systemd[1]: var-lib-containers-storage-overlay-d1faacd27ae8bf2b3aec79db683023f086e40c86a49a82e51027663f6dd2307f-merged.mount: Deactivated successfully.
Dec  1 14:19:02 np0005541455 podman[172582]: 2025-12-01 19:19:02.589901152 +0000 UTC m=+0.148899329 container cleanup eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 14:19:02 np0005541455 podman[172582]: multipathd
Dec  1 14:19:02 np0005541455 podman[172612]: multipathd
Dec  1 14:19:02 np0005541455 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  1 14:19:02 np0005541455 systemd[1]: Stopped multipathd container.
Dec  1 14:19:02 np0005541455 systemd[1]: Starting multipathd container...
Dec  1 14:19:02 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:19:02 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1faacd27ae8bf2b3aec79db683023f086e40c86a49a82e51027663f6dd2307f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 14:19:02 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1faacd27ae8bf2b3aec79db683023f086e40c86a49a82e51027663f6dd2307f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 14:19:02 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b.
Dec  1 14:19:02 np0005541455 podman[172625]: 2025-12-01 19:19:02.775169544 +0000 UTC m=+0.108417684 container init eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 14:19:02 np0005541455 multipathd[172640]: + sudo -E kolla_set_configs
Dec  1 14:19:02 np0005541455 podman[172625]: 2025-12-01 19:19:02.799519984 +0000 UTC m=+0.132768094 container start eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec  1 14:19:02 np0005541455 podman[172625]: multipathd
Dec  1 14:19:02 np0005541455 systemd[1]: Started multipathd container.
Dec  1 14:19:02 np0005541455 multipathd[172640]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:19:02 np0005541455 multipathd[172640]: INFO:__main__:Validating config file
Dec  1 14:19:02 np0005541455 multipathd[172640]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:19:02 np0005541455 multipathd[172640]: INFO:__main__:Writing out command to execute
Dec  1 14:19:02 np0005541455 multipathd[172640]: ++ cat /run_command
Dec  1 14:19:02 np0005541455 multipathd[172640]: + CMD='/usr/sbin/multipathd -d'
Dec  1 14:19:02 np0005541455 multipathd[172640]: + ARGS=
Dec  1 14:19:02 np0005541455 multipathd[172640]: + sudo kolla_copy_cacerts
Dec  1 14:19:02 np0005541455 podman[172647]: 2025-12-01 19:19:02.896066078 +0000 UTC m=+0.085815840 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 14:19:02 np0005541455 systemd[1]: eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b-66cb4703ae7fda50.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:19:02 np0005541455 systemd[1]: eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b-66cb4703ae7fda50.service: Failed with result 'exit-code'.
Dec  1 14:19:02 np0005541455 multipathd[172640]: + [[ ! -n '' ]]
Dec  1 14:19:02 np0005541455 multipathd[172640]: + . kolla_extend_start
Dec  1 14:19:02 np0005541455 multipathd[172640]: Running command: '/usr/sbin/multipathd -d'
Dec  1 14:19:02 np0005541455 multipathd[172640]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  1 14:19:02 np0005541455 multipathd[172640]: + umask 0022
Dec  1 14:19:02 np0005541455 multipathd[172640]: + exec /usr/sbin/multipathd -d
Dec  1 14:19:02 np0005541455 multipathd[172640]: 3230.702734 | --------start up--------
Dec  1 14:19:02 np0005541455 multipathd[172640]: 3230.702755 | read /etc/multipath.conf
Dec  1 14:19:02 np0005541455 multipathd[172640]: 3230.709554 | path checkers start up
Dec  1 14:19:03 np0005541455 python3.9[172830]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:04 np0005541455 python3.9[172982]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  1 14:19:05 np0005541455 python3.9[173134]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  1 14:19:05 np0005541455 kernel: Key type psk registered
Dec  1 14:19:06 np0005541455 python3.9[173295]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:19:06 np0005541455 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  1 14:19:06 np0005541455 python3.9[173419]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616745.5364034-630-55756857722654/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:07 np0005541455 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 14:19:07 np0005541455 python3.9[173572]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:08 np0005541455 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  1 14:19:08 np0005541455 python3.9[173725]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:19:08 np0005541455 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  1 14:19:08 np0005541455 systemd[1]: Stopped Load Kernel Modules.
Dec  1 14:19:08 np0005541455 systemd[1]: Stopping Load Kernel Modules...
Dec  1 14:19:08 np0005541455 systemd[1]: Starting Load Kernel Modules...
Dec  1 14:19:08 np0005541455 systemd[1]: Finished Load Kernel Modules.
Dec  1 14:19:09 np0005541455 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  1 14:19:09 np0005541455 python3.9[173882]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  1 14:19:11 np0005541455 systemd[1]: Reloading.
Dec  1 14:19:11 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:19:11 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:19:11 np0005541455 systemd[1]: Reloading.
Dec  1 14:19:11 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:19:11 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:19:12 np0005541455 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  1 14:19:12 np0005541455 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  1 14:19:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:19:12.159 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:19:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:19:12.161 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:19:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:19:12.161 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:19:12 np0005541455 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  1 14:19:12 np0005541455 systemd[1]: Starting man-db-cache-update.service...
Dec  1 14:19:12 np0005541455 systemd[1]: Reloading.
Dec  1 14:19:12 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:19:12 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:19:12 np0005541455 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  1 14:19:13 np0005541455 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  1 14:19:13 np0005541455 systemd[1]: Finished man-db-cache-update.service.
Dec  1 14:19:13 np0005541455 systemd[1]: man-db-cache-update.service: Consumed 1.622s CPU time.
Dec  1 14:19:13 np0005541455 systemd[1]: run-rbd0d79d5aaa24e318e1ad96db488cac4.service: Deactivated successfully.
Dec  1 14:19:13 np0005541455 python3.9[175322]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:19:13 np0005541455 iscsid[163698]: iscsid shutting down.
Dec  1 14:19:13 np0005541455 systemd[1]: Stopping Open-iSCSI...
Dec  1 14:19:13 np0005541455 systemd[1]: iscsid.service: Deactivated successfully.
Dec  1 14:19:13 np0005541455 systemd[1]: Stopped Open-iSCSI.
Dec  1 14:19:13 np0005541455 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  1 14:19:13 np0005541455 systemd[1]: Starting Open-iSCSI...
Dec  1 14:19:13 np0005541455 systemd[1]: Started Open-iSCSI.
Dec  1 14:19:14 np0005541455 podman[175339]: 2025-12-01 19:19:14.023286834 +0000 UTC m=+0.092286252 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 14:19:14 np0005541455 python3.9[175518]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:19:15 np0005541455 python3.9[175674]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:16 np0005541455 podman[175699]: 2025-12-01 19:19:16.312329018 +0000 UTC m=+0.082777665 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 14:19:17 np0005541455 python3.9[175846]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:19:17 np0005541455 systemd[1]: Reloading.
Dec  1 14:19:17 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:19:17 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:19:18 np0005541455 python3.9[176030]: ansible-ansible.builtin.service_facts Invoked
Dec  1 14:19:18 np0005541455 network[176047]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 14:19:18 np0005541455 network[176048]: 'network-scripts' will be removed from distribution in near future.
Dec  1 14:19:18 np0005541455 network[176049]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 14:19:23 np0005541455 python3.9[176323]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:19:24 np0005541455 python3.9[176476]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:19:25 np0005541455 python3.9[176629]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:19:26 np0005541455 python3.9[176782]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:19:27 np0005541455 python3.9[176935]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:19:27 np0005541455 python3.9[177088]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:19:28 np0005541455 python3.9[177241]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:19:29 np0005541455 python3.9[177394]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:19:30 np0005541455 python3.9[177547]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:31 np0005541455 python3.9[177699]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:31 np0005541455 python3.9[177851]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:32 np0005541455 python3.9[178003]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:33 np0005541455 podman[178120]: 2025-12-01 19:19:33.353984451 +0000 UTC m=+0.104851463 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 14:19:33 np0005541455 python3.9[178174]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:34 np0005541455 python3.9[178326]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:35 np0005541455 python3.9[178478]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:35 np0005541455 python3.9[178630]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:36 np0005541455 python3.9[178782]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:37 np0005541455 python3.9[178934]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:37 np0005541455 python3.9[179086]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:38 np0005541455 python3.9[179238]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:39 np0005541455 python3.9[179390]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:39 np0005541455 python3.9[179542]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:40 np0005541455 python3.9[179694]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:41 np0005541455 python3.9[179846]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:19:42 np0005541455 python3.9[179998]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:43 np0005541455 python3.9[180152]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 14:19:44 np0005541455 python3.9[180304]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:19:44 np0005541455 systemd[1]: Reloading.
Dec  1 14:19:44 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:19:44 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:19:44 np0005541455 podman[180306]: 2025-12-01 19:19:44.26436277 +0000 UTC m=+0.139504275 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  1 14:19:45 np0005541455 python3.9[180517]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:45 np0005541455 python3.9[180670]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:46 np0005541455 python3.9[180823]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:46 np0005541455 podman[180825]: 2025-12-01 19:19:46.509015508 +0000 UTC m=+0.070992346 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 14:19:47 np0005541455 python3.9[180994]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:47 np0005541455 python3.9[181147]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:48 np0005541455 python3.9[181300]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:49 np0005541455 python3.9[181453]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:49 np0005541455 python3.9[181606]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:19:51 np0005541455 python3.9[181759]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:19:52 np0005541455 python3.9[181911]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:19:53 np0005541455 python3.9[182063]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:19:53 np0005541455 python3.9[182215]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:19:54 np0005541455 python3.9[182367]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:19:55 np0005541455 python3.9[182519]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:19:55 np0005541455 python3.9[182671]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:19:56 np0005541455 python3.9[182823]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:19:57 np0005541455 python3.9[182975]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:19:57 np0005541455 python3.9[183127]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:20:02 np0005541455 python3.9[183279]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  1 14:20:03 np0005541455 podman[183404]: 2025-12-01 19:20:03.661520752 +0000 UTC m=+0.060496659 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd)
Dec  1 14:20:03 np0005541455 python3.9[183449]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 14:20:04 np0005541455 python3.9[183608]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 14:20:05 np0005541455 systemd-logind[797]: New session 25 of user zuul.
Dec  1 14:20:05 np0005541455 systemd[1]: Started Session 25 of User zuul.
Dec  1 14:20:05 np0005541455 systemd[1]: session-25.scope: Deactivated successfully.
Dec  1 14:20:05 np0005541455 systemd-logind[797]: Session 25 logged out. Waiting for processes to exit.
Dec  1 14:20:05 np0005541455 systemd-logind[797]: Removed session 25.
Dec  1 14:20:06 np0005541455 python3.9[183794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:20:07 np0005541455 python3.9[183915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616806.1221259-1229-97537337022945/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:20:07 np0005541455 python3.9[184065]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:20:08 np0005541455 python3.9[184141]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:20:08 np0005541455 python3.9[184291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:20:09 np0005541455 python3.9[184412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616808.4577782-1229-85919814122696/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:20:10 np0005541455 python3.9[184562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:20:10 np0005541455 python3.9[184683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616809.6858387-1229-65524624614678/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:20:11 np0005541455 python3.9[184833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:20:12 np0005541455 python3.9[184954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616810.9266248-1229-138411431894767/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:20:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:20:12.161 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:20:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:20:12.163 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:20:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:20:12.163 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:20:12 np0005541455 python3.9[185104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:20:13 np0005541455 python3.9[185225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616812.3107975-1229-144139975857145/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:20:14 np0005541455 python3.9[185377]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:20:14 np0005541455 podman[185501]: 2025-12-01 19:20:14.809022917 +0000 UTC m=+0.102317142 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  1 14:20:15 np0005541455 python3.9[185547]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:20:15 np0005541455 python3.9[185705]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:20:16 np0005541455 python3.9[185857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:20:16 np0005541455 podman[185952]: 2025-12-01 19:20:16.86747231 +0000 UTC m=+0.063300652 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  1 14:20:17 np0005541455 python3.9[185998]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764616815.9268067-1336-61960759324495/.source _original_basename=.dyichljz follow=False checksum=caad8b41a5335c5f45be24ade6ca6437b8be23ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
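The copy above pins /var/lib/nova/compute_id to mode 0400, owner nova, and passes attributes=+i, i.e. the chattr immutable bit, so the compute host identity cannot be modified in place. A quick verification sketch (commands assumed, not part of the captured log):

    lsattr /var/lib/nova/compute_id                # an 'i' flag marks the file immutable
    stat -c '%U:%G %a' /var/lib/nova/compute_id    # expected: nova:nova 400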
Dec  1 14:20:17 np0005541455 python3.9[186150]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:20:18 np0005541455 python3.9[186302]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:20:19 np0005541455 python3.9[186423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616818.1714084-1362-28540915831460/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:20:20 np0005541455 python3.9[186573]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:20:20 np0005541455 python3.9[186694]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616819.755044-1377-207187616098689/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
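nova_compute.json and nova_compute_init.json are rendered into /var/lib/openstack/config/containers and consumed by the container_config_data / edpm_container_manage steps that follow. To see what was rendered, the file can simply be pretty-printed; key names such as 'image' and 'volumes' are inferred here from the config_data labels logged below, not from the file itself:

    python3 -m json.tool /var/lib/openstack/config/containers/nova_compute.json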
Dec  1 14:20:21 np0005541455 python3.9[186846]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  1 14:20:22 np0005541455 python3.9[186998]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:20:23 np0005541455 python3[187150]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:20:23 np0005541455 podman[187185]: 2025-12-01 19:20:23.541432118 +0000 UTC m=+0.017740055 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 14:20:23 np0005541455 podman[187185]: 2025-12-01 19:20:23.687046484 +0000 UTC m=+0.163354401 container create b7e71d1fa76afceccc1bd66ef02742c75fce787a57939c3d37257723a6c83554 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  1 14:20:23 np0005541455 python3[187150]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
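The PODMAN-CONTAINER-DEBUG line records the exact podman create invocation the module ran; at this point the container exists but has not yet been started. A hedged way to confirm that from the host (not part of the captured log):

    podman ps -a --filter name=nova_compute_init --format '{{.Names}} {{.Status}}'
    podman inspect nova_compute_init --format '{{ index .Config.Labels "config_id" }}'   # expected: edpm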
Dec  1 14:20:24 np0005541455 python3.9[187375]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:20:25 np0005541455 python3.9[187529]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  1 14:20:26 np0005541455 python3.9[187681]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:20:27 np0005541455 python3[187833]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:20:27 np0005541455 podman[187868]: 2025-12-01 19:20:27.637316407 +0000 UTC m=+0.059316807 container create a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 14:20:27 np0005541455 podman[187868]: 2025-12-01 19:20:27.605528623 +0000 UTC m=+0.027529083 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  1 14:20:27 np0005541455 python3[187833]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
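Note the --conmon-pidfile /run/nova_compute.pid flag: the conmon PID written there is what the edpm_nova_compute.service unit created below tracks. The unit body itself is not shown in this log; rather than guessing at its ExecStart/PIDFile wiring, it can be read directly:

    systemctl cat edpm_nova_compute.service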
Dec  1 14:20:28 np0005541455 python3.9[188058]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:20:29 np0005541455 python3.9[188212]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:20:30 np0005541455 python3.9[188363]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764616829.5587053-1469-210359000367771/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:20:30 np0005541455 python3.9[188439]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:20:30 np0005541455 systemd[1]: Reloading.
Dec  1 14:20:30 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:20:30 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:20:31 np0005541455 python3.9[188550]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:20:31 np0005541455 systemd[1]: Reloading.
Dec  1 14:20:31 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:20:31 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
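The ansible-systemd invocations above (daemon_reload=True, then state=restarted enabled=True) amount to the following manual sequence, shown here as a sketch of the equivalent commands rather than anything taken from the log:

    systemctl daemon-reload                      # pick up the new unit file
    systemctl enable edpm_nova_compute.service   # enabled=True
    systemctl restart edpm_nova_compute.service  # state=restarted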
Dec  1 14:20:32 np0005541455 systemd[1]: Starting nova_compute container...
Dec  1 14:20:32 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:20:32 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:32 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:32 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:32 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:32 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
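These kernel notices mean the host XFS filesystems backing the bind mounts were created without the bigtime feature, so their inode timestamps only reach 2038; they are informational, not errors. On recent xfsprogs the feature can be checked per mount (assumed command, not in this log):

    xfs_info /var/lib/nova | grep -o 'bigtime=[01]'   # bigtime=1 extends timestamps past 2038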
Dec  1 14:20:32 np0005541455 podman[188589]: 2025-12-01 19:20:32.310641131 +0000 UTC m=+0.148789766 container init a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 14:20:32 np0005541455 podman[188589]: 2025-12-01 19:20:32.318697303 +0000 UTC m=+0.156845878 container start a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3)
Dec  1 14:20:32 np0005541455 podman[188589]: nova_compute
Dec  1 14:20:32 np0005541455 nova_compute[188605]: + sudo -E kolla_set_configs
Dec  1 14:20:32 np0005541455 systemd[1]: Started nova_compute container.
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Validating config file
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Copying service configuration files
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Deleting /etc/ceph
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Creating directory /etc/ceph
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /etc/ceph
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Writing out command to execute
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 14:20:32 np0005541455 nova_compute[188605]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 14:20:32 np0005541455 nova_compute[188605]: ++ cat /run_command
Dec  1 14:20:32 np0005541455 nova_compute[188605]: + CMD=nova-compute
Dec  1 14:20:32 np0005541455 nova_compute[188605]: + ARGS=
Dec  1 14:20:32 np0005541455 nova_compute[188605]: + sudo kolla_copy_cacerts
Dec  1 14:20:32 np0005541455 nova_compute[188605]: + [[ ! -n '' ]]
Dec  1 14:20:32 np0005541455 nova_compute[188605]: + . kolla_extend_start
Dec  1 14:20:32 np0005541455 nova_compute[188605]: Running command: 'nova-compute'
Dec  1 14:20:32 np0005541455 nova_compute[188605]: + echo 'Running command: '\''nova-compute'\'''
Dec  1 14:20:32 np0005541455 nova_compute[188605]: + umask 0022
Dec  1 14:20:32 np0005541455 nova_compute[188605]: + exec nova-compute
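The xtrace lines above are the tail of kolla's container entrypoint: configuration is copied in, the service command is read from /run_command, and the shell replaces itself with nova-compute. A minimal sketch of that flow, reconstructed from the trace (the real script shipped in the image handles more cases than shown):

    sudo -E kolla_set_configs      # apply /var/lib/kolla/config_files/config.json
    CMD=$(cat /run_command)        # here: nova-compute
    ARGS=
    sudo kolla_copy_cacerts        # install the mounted CA bundle
    . kolla_extend_start           # image-specific startup hook
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD} ${ARGS}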
Dec  1 14:20:33 np0005541455 python3.9[188767]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:20:34 np0005541455 podman[188891]: 2025-12-01 19:20:34.137818758 +0000 UTC m=+0.095735987 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Dec  1 14:20:34 np0005541455 python3.9[188928]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:20:34 np0005541455 nova_compute[188605]: 2025-12-01 19:20:34.331 188609 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 14:20:34 np0005541455 nova_compute[188605]: 2025-12-01 19:20:34.332 188609 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 14:20:34 np0005541455 nova_compute[188605]: 2025-12-01 19:20:34.332 188609 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 14:20:34 np0005541455 nova_compute[188605]: 2025-12-01 19:20:34.332 188609 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  1 14:20:34 np0005541455 nova_compute[188605]: 2025-12-01 19:20:34.461 188609 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 14:20:34 np0005541455 nova_compute[188605]: 2025-12-01 19:20:34.488 188609 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 14:20:34 np0005541455 nova_compute[188605]: 2025-12-01 19:20:34.488 188609 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
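This probe greps the iscsiadm visible inside the container, which is the run-on-host wrapper copied in during kolla_set_configs above, for the node.session.scan feature string. The exit status of 1 only means the string was not found; the failure is expected, hence "Not Retrying". Rerunning it by hand is harmless:

    grep -F node.session.scan /sbin/iscsiadm; echo "exit=$?"   # exit=1: feature string absent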
Dec  1 14:20:35 np0005541455 python3.9[189089]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.082 188609 INFO nova.virt.driver [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.222 188609 INFO nova.compute.provider_config [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.267 188609 DEBUG oslo_concurrency.lockutils [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.267 188609 DEBUG oslo_concurrency.lockutils [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.268 188609 DEBUG oslo_concurrency.lockutils [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.268 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.268 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.268 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.268 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.268 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.268 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.269 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.269 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.269 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.269 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.269 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.269 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.269 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.270 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.270 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.270 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.270 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.270 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.270 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.270 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.271 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.271 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.271 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.271 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.271 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.271 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.271 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.272 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.272 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.272 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.272 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.272 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.272 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.272 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.272 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.273 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.273 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.273 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.273 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.273 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.273 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.273 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.274 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.274 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.274 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.274 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.274 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.274 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.274 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.275 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.275 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.275 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.275 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.275 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.275 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.275 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.276 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.276 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.276 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.276 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.276 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.276 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.276 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.276 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.277 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.277 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.277 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.277 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.277 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.277 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.277 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.278 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.278 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.278 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.278 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.278 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.278 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.278 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.278 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.279 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.279 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.279 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.279 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.279 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.279 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.279 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.280 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.280 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.280 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.280 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.280 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.280 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.280 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.281 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.281 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.281 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.281 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.281 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.281 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.281 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.281 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.282 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.282 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.282 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.282 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.282 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.282 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.282 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.282 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.283 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.283 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.283 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.283 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.283 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.283 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.283 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.284 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.284 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.284 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.284 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.284 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.284 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.284 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.285 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.285 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.285 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.285 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.285 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.285 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.285 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.285 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.286 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.286 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.286 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.286 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.286 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.286 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.286 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.286 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.287 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.287 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.287 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.287 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.287 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.287 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.287 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.288 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.288 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.288 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.288 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.288 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.288 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.288 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.289 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.289 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.289 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.289 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.289 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.289 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.289 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.290 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.290 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.290 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.290 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.290 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.290 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.290 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.291 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.291 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.291 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.291 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.291 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.291 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.291 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.292 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.292 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.292 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.292 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.292 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.292 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.292 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.293 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.293 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.293 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.293 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.293 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.293 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.294 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.294 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.294 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.294 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.294 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.294 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.294 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.295 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.295 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.295 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.295 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.295 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.295 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.295 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.296 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.296 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.296 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.296 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.296 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.296 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.296 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.297 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.297 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.297 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.297 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.297 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.297 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.297 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.298 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.298 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.298 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.298 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.298 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.298 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.298 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.299 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.299 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.299 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.299 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.299 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.299 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.299 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.300 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.300 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.300 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.300 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.300 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.300 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.300 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.301 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.301 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.301 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.301 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.301 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.301 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.302 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.302 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.302 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.302 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.302 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.302 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.302 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.303 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.303 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.303 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.303 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.303 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.303 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.303 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.304 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.304 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.304 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.304 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.304 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.304 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.304 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.305 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.305 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.305 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.305 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.305 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.305 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.305 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.306 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.306 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.306 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.306 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.306 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.306 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.306 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.307 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.307 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.307 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.307 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.307 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.307 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.307 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.308 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.308 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.308 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.308 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.308 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.308 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.308 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.308 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.309 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.309 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.309 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.309 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.309 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.309 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.309 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.310 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.310 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.310 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.310 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.310 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.310 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
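One detail worth spelling out in the ephemeral_storage_encryption block: cipher aes-xts-plain64 with key_size = 512 means AES-256, not AES-512. XTS consumes a double-length key and splits it into two 256-bit AES keys, and plain64 builds the tweak from the 64-bit sector number, little-endian. A hedged demonstration using the third-party cryptography package; nova applies this cipher through dm-crypt on the ephemeral disk, not in Python:

    # Hedged sketch: what "cipher = aes-xts-plain64, key_size = 512" means.
    import os

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)                 # 512 bits = two 256-bit AES keys
    sector_number = 0
    tweak = sector_number.to_bytes(16, 'little')  # "plain64": LE sector number

    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    sector = os.urandom(512)             # one 512-byte disk sector
    ciphertext = encryptor.update(sector) + encryptor.finalize()
    assert len(ciphertext) == len(sector)  # XTS is length-preserving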
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.310 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.311 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.311 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.311 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.311 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.311 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.311 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.311 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.312 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.312 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.312 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.312 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.312 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.312 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.312 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.313 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.313 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.313 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.313 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.313 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.313 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.313 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.314 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.314 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.314 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.314 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.314 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.314 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.314 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.314 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
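Most of the glance.* options above (cafile, connect_retries, timeout, valid_interfaces, ...) are generic keystoneauth1 client knobs rather than anything image-specific; the same shape repeats below for ironic.* and keystone.*. A hedged sketch of how service_type = image, valid_interfaces = ['internal'] and region_name = regionOne plausibly drive catalog-based endpoint discovery; the auth URL and credentials are placeholders, not values from this host:

    # Hedged sketch: the glance.* options as a keystoneauth1 Adapter.
    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='http://keystone.example:5000/v3',   # placeholder
        username='nova', password='example',          # placeholders
        project_name='service',
        user_domain_name='Default', project_domain_name='Default',
    )

    image_api = adapter.Adapter(
        session=session.Session(auth=auth),
        service_type='image',       # glance.service_type
        interface='internal',       # glance.valid_interfaces
        region_name='regionOne',    # glance.region_name
    )

    # The endpoint is resolved from the Keystone catalog, consistent with
    # glance.api_servers and glance.endpoint_override both being None above.
    resp = image_api.get('/v2/images')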
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.315 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.315 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.315 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.315 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.315 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.315 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.315 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.316 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.316 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.316 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.316 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.316 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.316 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.316 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.317 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.317 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.317 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.317 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.317 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.317 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.318 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.318 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.318 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.318 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.318 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.318 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.318 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
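Reading the image_cache block as one unit: the cache manager task runs every 2400 s (40 min), and an unused base image becomes eligible for removal after 86400 s (24 h), or after 3600 s (1 h) for a resized variant. A hedged sketch of just that age test; nova's real reaper (nova/virt/libvirt/imagecache.py) additionally checks that no instance still references the image:

    # Hedged sketch of the age test implied by the image_cache values above.
    import os
    import time

    MANAGER_INTERVAL = 2400    # image_cache.manager_interval (s)
    ORIGINAL_MIN_AGE = 86400   # ...remove_unused_original_minimum_age_seconds
    RESIZED_MIN_AGE = 3600     # ...remove_unused_resized_minimum_age_seconds

    def is_reapable(path, resized, now=None):
        """True if a cached, unused base image is old enough to remove."""
        now = time.time() if now is None else now
        age = now - os.path.getmtime(path)
        return age > (RESIZED_MIN_AGE if resized else ORIGINAL_MIN_AGE)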
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.319 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.319 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.319 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.319 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.319 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.319 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.319 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.320 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.320 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.320 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.320 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.320 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.320 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.320 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.321 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.321 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.321 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.321 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.321 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.321 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.321 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.321 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.322 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.322 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.322 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.322 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
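The ironic.* block is typically unused on a host running the libvirt driver, as this one appears to be, but the retry pair is worth a note: api_max_retries = 60 at api_retry_interval = 2 caps the wait on a busy Ironic API at roughly 60 x 2 = 120 s. A hedged sketch of that retry shape, not nova's actual client code:

    # Hedged sketch: the retry budget implied by ironic.api_max_retries
    # and ironic.api_retry_interval (60 attempts x 2 s sleeps ~= 120 s).
    import time

    API_MAX_RETRIES = 60     # ironic.api_max_retries
    API_RETRY_INTERVAL = 2   # ironic.api_retry_interval, seconds

    def call_with_retries(fn):
        for attempt in range(1, API_MAX_RETRIES + 1):
            try:
                return fn()
            except ConnectionError:
                if attempt == API_MAX_RETRIES:
                    raise
                time.sleep(API_RETRY_INTERVAL)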
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.322 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.322 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.322 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.323 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.323 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.323 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.323 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.323 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.323 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.324 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.324 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.324 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.324 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.324 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.324 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.324 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.324 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.325 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.325 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.325 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.325 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.325 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.325 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.325 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.326 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.326 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.326 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.326 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.326 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.326 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.326 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.327 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.327 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.327 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.327 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.327 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.327 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.327 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.327 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.328 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.328 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.328 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.328 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.328 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.328 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.328 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.329 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.329 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.329 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.329 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.329 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.329 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.329 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.330 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.330 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.330 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.330 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.330 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.330 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.330 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.330 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.331 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.331 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.331 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.331 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.331 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.331 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.332 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.332 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.332 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.332 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.332 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.332 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.332 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.333 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.333 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.333 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.333 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.333 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.333 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.333 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.334 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.334 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.334 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.334 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.334 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.334 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.335 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.335 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.335 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.335 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.335 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.335 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.336 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.336 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.336 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.336 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.336 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.336 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.337 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.337 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.337 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.337 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.337 188609 WARNING oslo_config.cfg [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  1 14:20:35 np0005541455 nova_compute[188605]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  1 14:20:35 np0005541455 nova_compute[188605]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  1 14:20:35 np0005541455 nova_compute[188605]: and ``live_migration_inbound_addr`` respectively.
Dec  1 14:20:35 np0005541455 nova_compute[188605]: ).  Its value may be silently ignored in the future.#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.338 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.338 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.338 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.338 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.339 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.339 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.339 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.339 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.339 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.339 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.339 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.340 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.340 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.340 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.340 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.340 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.340 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.340 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.341 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.341 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.341 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.341 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.341 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.341 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.341 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.342 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.342 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.342 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.342 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.342 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.342 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.343 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.343 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.343 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.343 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.343 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.343 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.344 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.344 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.344 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.344 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.344 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.344 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.344 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.345 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.345 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.345 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.345 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.345 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.345 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.345 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.346 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.346 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.346 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.346 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.346 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.346 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.346 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.347 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.347 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.347 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.347 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.347 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.347 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.347 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.348 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.348 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.348 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.348 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.348 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.348 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.348 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.349 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.349 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.349 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.349 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.349 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.349 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.350 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.350 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.350 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.350 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.350 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.350 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.351 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.351 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.351 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.351 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.351 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.351 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.351 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.352 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.352 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.352 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.352 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.352 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.352 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.352 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.353 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.353 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.353 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.353 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.353 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.353 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.353 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.354 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.354 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.354 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.354 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.354 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.354 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.354 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.355 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.355 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.355 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.355 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.355 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.355 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.355 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.356 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.356 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.356 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.356 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.356 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.356 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.356 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.357 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.357 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.357 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.357 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.357 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.357 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.357 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.358 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.358 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.358 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.358 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.358 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.358 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.359 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.359 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.359 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.359 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.359 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.359 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.359 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.360 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.360 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.360 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.360 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.360 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.360 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.360 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.361 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.361 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.361 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.361 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.361 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.361 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.361 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.362 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.362 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.362 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.362 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.362 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.362 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.362 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.363 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.363 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.363 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.363 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.363 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.363 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.364 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.364 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.364 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.364 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.364 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.364 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.364 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.365 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.365 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.365 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.365 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.365 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.365 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.365 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.366 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.366 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.366 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.366 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.366 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.366 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.366 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.367 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.367 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.367 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.367 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.367 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.367 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.368 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.368 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.368 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.368 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.368 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.368 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.369 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.369 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.369 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.369 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.369 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.369 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.370 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.370 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.370 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.370 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.370 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.370 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.370 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.371 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.371 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.371 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.371 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.371 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.371 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.372 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.372 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.372 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.372 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.372 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.372 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.373 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.373 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.373 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.373 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.373 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.373 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.373 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.374 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.374 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.374 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.374 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.374 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.374 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.374 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.375 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.375 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.375 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.375 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.375 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.375 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.376 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.376 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.376 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.376 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.376 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.376 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.377 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.377 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.377 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.377 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.377 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.378 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.378 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.378 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.378 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.378 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.378 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.379 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.379 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.379 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.379 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.379 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.379 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.379 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.380 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.380 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.380 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.380 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.380 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.380 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.380 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.381 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.381 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.381 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.381 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.381 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.381 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.381 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.382 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.382 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.382 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.382 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.382 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.383 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.383 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.383 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.383 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.383 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.383 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.384 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.384 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.384 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.384 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.384 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.384 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.385 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.385 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.385 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.385 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.385 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.385 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.385 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.386 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.386 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.386 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.386 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.386 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.386 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.386 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.387 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.387 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.387 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.387 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.387 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.387 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.387 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.387 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.388 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.388 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.388 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.388 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.388 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.388 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.388 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.389 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.389 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.389 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.389 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.389 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.389 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.390 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.390 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.390 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.390 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.390 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.390 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.391 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.391 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.391 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.391 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.391 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.392 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.392 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.392 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.392 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.392 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.392 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.392 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.393 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.393 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.393 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.393 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.393 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.393 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.393 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.394 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.394 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.394 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.394 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.394 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.394 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.394 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.395 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.395 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.395 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.395 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.395 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.395 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.396 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.396 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.396 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.396 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.396 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.396 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.397 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.397 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.397 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.397 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.397 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.397 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.397 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.398 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.398 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.398 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.398 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.398 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.398 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.398 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.399 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.399 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.399 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.399 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.399 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.399 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.400 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.400 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.400 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.400 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.400 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.400 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.400 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.401 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.401 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.401 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.401 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.401 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.401 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.402 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.402 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.402 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.402 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.402 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.402 188609 DEBUG oslo_service.service [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
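[Editor's note: the block ending in the asterisk row above is the tail of oslo.config's startup dump. At DEBUG level, every registered option is written out once as a "group.option = value" line via ConfigOpts.log_opt_values() (the cfg.py call site shown on each line), and the dump is terminated by a row of 80 asterisks. A minimal sketch of the same mechanism, using only the stock oslo.config API; the option name below is illustrative, not taken from nova's real option set:]

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt("username", default="nova")], group="oslo_limit"
    )
    CONF([])  # parse an (empty) command line so the object is usable

    # Emits one "oslo_limit.username = nova" style line per option,
    # followed by the 80-asterisk terminator seen above.
    CONF.log_opt_values(LOG, logging.DEBUG)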
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.403 188609 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.471 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.472 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.472 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.473 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec  1 14:20:35 np0005541455 systemd[1]: Starting libvirt QEMU daemon...
Dec  1 14:20:35 np0005541455 systemd[1]: Started libvirt QEMU daemon.
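[Editor's note: the systemd pair above is socket activation at work: nova-compute's first connection attempt to qemu:///system (two messages earlier) hits the libvirt socket, and systemd starts the QEMU driver daemon on demand. A minimal reproduction with the libvirt Python binding, assuming libvirt-python is installed and the caller is allowed to open the system URI:]

    import libvirt

    # Opening qemu:///system connects to the libvirt socket; on a
    # socket-activated host this is what makes systemd log the
    # "Starting/Started libvirt QEMU daemon" pair seen above.
    conn = libvirt.open("qemu:///system")
    print(conn.getHostname())
    conn.close()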
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.548 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f8a720e7370> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.550 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f8a720e7370> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.552 188609 INFO nova.virt.libvirt.driver [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Connection event '1' reason 'None'
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.602 188609 WARNING nova.virt.libvirt.driver [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec  1 14:20:35 np0005541455 nova_compute[188605]: 2025-12-01 19:20:35.603 188609 DEBUG nova.virt.libvirt.volume.mount [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
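[Editor's note: the ComputeHostNotFound warning above is expected on a fresh node: no service record exists yet for compute-0.ctlplane.example.com, so the initial status update is skipped; the record is typically created moments later as the compute manager finishes starting up. One way to confirm registration afterwards, sketched with openstacksdk; the cloud name "overcloud" is a placeholder, not taken from this log:]

    import openstack

    # "overcloud" must match an entry in clouds.yaml; it is an assumption.
    conn = openstack.connect(cloud="overcloud")
    for svc in conn.compute.services():
        if svc.binary == "nova-compute":
            print(svc.host, svc.state, svc.status)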
Dec  1 14:20:36 np0005541455 python3.9[189293]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
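[Editor's note: the Ansible invocation above runs containers.podman.podman_container with state=absent and force_delete=True against nova_nvme_cleaner, i.e. "remove this container if it exists". A rough CLI equivalent, sketched via subprocess; only the container name comes from the log, and the flags are standard podman options rather than the module's actual implementation:]

    import subprocess

    # "podman rm --force" also removes a running container; "--ignore"
    # turns a missing container into a no-op, approximating state=absent.
    subprocess.run(
        ["podman", "rm", "--force", "--ignore", "nova_nvme_cleaner"],
        check=True,
    )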
Dec  1 14:20:36 np0005541455 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 14:20:36 np0005541455 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.401 188609 INFO nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Libvirt host capabilities <capabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <host>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <uuid>321a04b4-6595-4e40-a9f1-f8a11b88d7a9</uuid>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <arch>x86_64</arch>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model>EPYC-Rome-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <vendor>AMD</vendor>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <microcode version='16777317'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <signature family='23' model='49' stepping='0'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='x2apic'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='tsc-deadline'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='osxsave'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='hypervisor'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='tsc_adjust'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='spec-ctrl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='stibp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='arch-capabilities'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='cmp_legacy'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='topoext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='virt-ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='lbrv'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='tsc-scale'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='vmcb-clean'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='pause-filter'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='pfthreshold'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='svme-addr-chk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='rdctl-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='skip-l1dfl-vmentry'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='mds-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature name='pschange-mc-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <pages unit='KiB' size='4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <pages unit='KiB' size='2048'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <pages unit='KiB' size='1048576'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <power_management>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <suspend_mem/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <suspend_disk/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <suspend_hybrid/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </power_management>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <iommu support='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <migration_features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <live/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <uri_transports>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <uri_transport>tcp</uri_transport>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <uri_transport>rdma</uri_transport>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </uri_transports>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </migration_features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <topology>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <cells num='1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <cell id='0'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:          <memory unit='KiB'>7864324</memory>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:          <pages unit='KiB' size='4'>1966081</pages>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:          <pages unit='KiB' size='2048'>0</pages>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:          <distances>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:            <sibling id='0' value='10'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:          </distances>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:          <cpus num='8'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:          </cpus>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        </cell>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </cells>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </topology>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <cache>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </cache>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <secmodel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model>selinux</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <doi>0</doi>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </secmodel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <secmodel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model>dac</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <doi>0</doi>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </secmodel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </host>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <guest>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <os_type>hvm</os_type>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <arch name='i686'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <wordsize>32</wordsize>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <domain type='qemu'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <domain type='kvm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </arch>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <pae/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <nonpae/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <acpi default='on' toggle='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <apic default='on' toggle='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <cpuselection/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <deviceboot/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <disksnapshot default='on' toggle='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <externalSnapshot/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </guest>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <guest>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <os_type>hvm</os_type>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <arch name='x86_64'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <wordsize>64</wordsize>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <domain type='qemu'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <domain type='kvm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </arch>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <acpi default='on' toggle='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <apic default='on' toggle='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <cpuselection/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <deviceboot/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <disksnapshot default='on' toggle='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <externalSnapshot/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </guest>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 
Dec  1 14:20:36 np0005541455 nova_compute[188605]: </capabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 
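[Editor's note: the <capabilities> document above is the verbatim result of libvirt's getCapabilities() call that nova makes right after connecting; the per-arch, per-machine-type dumps that follow come from getDomainCapabilities(). A short sketch of fetching and inspecting both with the libvirt Python binding and ElementTree, assuming a read-only connection is permitted:]

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")

    # Host-wide view: the <host><cpu> block dumped above.
    caps = ET.fromstring(conn.getCapabilities())
    print(caps.findtext("host/cpu/model"), caps.findtext("host/cpu/vendor"))

    # Per-(emulator, arch, machine, virttype) view: the <domainCapabilities>
    # documents nova requests next, one per supported machine type.
    domcaps = ET.fromstring(
        conn.getDomainCapabilities(
            "/usr/libexec/qemu-kvm", "x86_64", "q35", "kvm", 0
        )
    )
    print(domcaps.findtext("machine"))
    conn.close()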
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.408 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.425 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  1 14:20:36 np0005541455 nova_compute[188605]: <domainCapabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <domain>kvm</domain>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <arch>i686</arch>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <vcpu max='4096'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <iothreads supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <os supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <enum name='firmware'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <loader supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>rom</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pflash</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='readonly'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>yes</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>no</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='secure'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>no</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </loader>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </os>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='host-passthrough' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='hostPassthroughMigratable'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>on</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>off</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='maximum' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='maximumMigratable'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>on</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>off</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='host-model' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <vendor>AMD</vendor>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='x2apic'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='hypervisor'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='stibp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='overflow-recov'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='succor'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='lbrv'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc-scale'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='flushbyasid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='pause-filter'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='pfthreshold'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='disable' name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='custom' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Dhyana-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Genoa'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='auto-ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='auto-ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-128'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-256'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-512'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v6'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v7'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='KnightsMill'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512er'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512pf'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='KnightsMill-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512er'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512pf'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G4-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tbm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G5-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tbm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SierraForest'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cmpccxadd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SierraForest-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cmpccxadd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='athlon'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='athlon-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='core2duo'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='core2duo-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='coreduo'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='coreduo-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='n270'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='n270-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='phenom'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='phenom-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <memoryBacking supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <enum name='sourceType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>file</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>anonymous</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>memfd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </memoryBacking>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <devices>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <disk supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='diskDevice'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>disk</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>cdrom</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>floppy</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>lun</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='bus'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>fdc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>scsi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>sata</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-non-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </disk>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <graphics supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vnc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>egl-headless</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dbus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </graphics>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <video supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='modelType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vga</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>cirrus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>none</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>bochs</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ramfb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </video>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <hostdev supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='mode'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>subsystem</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='startupPolicy'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>default</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>mandatory</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>requisite</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>optional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='subsysType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pci</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>scsi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='capsType'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='pciBackend'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </hostdev>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <rng supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-non-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>random</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>egd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>builtin</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </rng>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <filesystem supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='driverType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>path</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>handle</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtiofs</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </filesystem>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <tpm supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tpm-tis</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tpm-crb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>emulator</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>external</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendVersion'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>2.0</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </tpm>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <redirdev supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='bus'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </redirdev>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <channel supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pty</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>unix</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </channel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <crypto supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>qemu</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>builtin</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </crypto>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <interface supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>default</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>passt</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </interface>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <panic supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>isa</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>hyperv</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </panic>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <console supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>null</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pty</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dev</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>file</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pipe</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>stdio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>udp</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tcp</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>unix</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>qemu-vdagent</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dbus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </console>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </devices>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <gic supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <vmcoreinfo supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <genid supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <backingStoreInput supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <backup supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <async-teardown supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <ps2 supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <sev supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <sgx supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <hyperv supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='features'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>relaxed</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vapic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>spinlocks</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vpindex</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>runtime</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>synic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>stimer</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>reset</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vendor_id</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>frequencies</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>reenlightenment</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tlbflush</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ipi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>avic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>emsr_bitmap</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>xmm_input</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <defaults>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <spinlocks>4095</spinlocks>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <stimer_direct>on</stimer_direct>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </defaults>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </hyperv>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <launchSecurity supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='sectype'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tdx</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </launchSecurity>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: </domainCapabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
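[editor's note] The XML dump above is the document libvirt returns from virConnectGetDomainCapabilities, which Nova's _get_domain_capabilities wraps. A minimal sketch of retrieving the same document outside Nova with the libvirt-python bindings and summarizing CPU-model usability follows; the qemu:///system URI and the emulator/arch/machine/virttype arguments are assumptions taken from the values visible in this log, not Nova's actual call site.

    # Sketch only (not Nova's code): fetch domainCapabilities the way this
    # log entry does and list which named CPU models are usable.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')          # assumed local connection
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',                   # <path> seen in the log
        'i686',                                    # <arch> of the second dump
        'pc',                                      # machine_type Nova requested
        'kvm')                                     # <domain> type
    root = ET.fromstring(caps_xml)

    # Walk <cpu><mode name='custom'> and report each model's usable= and
    # deprecated= attributes, mirroring the entries dumped above.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(model.text, model.get('usable'), model.get('deprecated', 'no'))
    conn.close()

For a model reported usable='no', the sibling <blockers model='...'> element (as in the dump that follows) names the host-missing features responsible.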
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.431 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  1 14:20:36 np0005541455 nova_compute[188605]: <domainCapabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <domain>kvm</domain>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <arch>i686</arch>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <vcpu max='240'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <iothreads supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <os supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <enum name='firmware'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <loader supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>rom</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pflash</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='readonly'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>yes</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>no</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='secure'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>no</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </loader>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </os>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='host-passthrough' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='hostPassthroughMigratable'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>on</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>off</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='maximum' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='maximumMigratable'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>on</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>off</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='host-model' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <vendor>AMD</vendor>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='x2apic'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='hypervisor'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='stibp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='overflow-recov'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='succor'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='lbrv'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc-scale'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='flushbyasid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='pause-filter'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='pfthreshold'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='disable' name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='custom' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Dhyana-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Genoa'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='auto-ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='auto-ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-128'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-256'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-512'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v6'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v7'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='KnightsMill'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512er'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512pf'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='KnightsMill-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512er'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512pf'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G4-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tbm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G5-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tbm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SierraForest'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cmpccxadd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SierraForest-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cmpccxadd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='athlon'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='athlon-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='core2duo'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='core2duo-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='coreduo'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='coreduo-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='n270'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='n270-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='phenom'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='phenom-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <memoryBacking supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <enum name='sourceType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>file</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>anonymous</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>memfd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </memoryBacking>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <devices>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <disk supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='diskDevice'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>disk</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>cdrom</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>floppy</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>lun</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='bus'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ide</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>fdc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>scsi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>sata</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-non-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </disk>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <graphics supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vnc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>egl-headless</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dbus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </graphics>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <video supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='modelType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vga</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>cirrus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>none</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>bochs</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ramfb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </video>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <hostdev supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='mode'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>subsystem</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='startupPolicy'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>default</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>mandatory</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>requisite</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>optional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='subsysType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pci</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>scsi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='capsType'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='pciBackend'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </hostdev>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <rng supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-non-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>random</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>egd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>builtin</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </rng>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <filesystem supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='driverType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>path</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>handle</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtiofs</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </filesystem>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <tpm supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tpm-tis</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tpm-crb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>emulator</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>external</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendVersion'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>2.0</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </tpm>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <redirdev supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='bus'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </redirdev>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <channel supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pty</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>unix</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </channel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <crypto supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>qemu</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>builtin</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </crypto>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <interface supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>default</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>passt</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </interface>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <panic supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>isa</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>hyperv</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </panic>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <console supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>null</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pty</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dev</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>file</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pipe</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>stdio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>udp</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tcp</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>unix</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>qemu-vdagent</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dbus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </console>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </devices>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <gic supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <vmcoreinfo supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <genid supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <backingStoreInput supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <backup supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <async-teardown supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <ps2 supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <sev supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <sgx supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <hyperv supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='features'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>relaxed</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vapic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>spinlocks</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vpindex</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>runtime</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>synic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>stimer</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>reset</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vendor_id</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>frequencies</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>reenlightenment</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tlbflush</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ipi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>avic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>emsr_bitmap</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>xmm_input</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <defaults>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <spinlocks>4095</spinlocks>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <stimer_direct>on</stimer_direct>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </defaults>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </hyperv>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <launchSecurity supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='sectype'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tdx</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </launchSecurity>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: </domainCapabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.458 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.462 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  1 14:20:36 np0005541455 nova_compute[188605]: <domainCapabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <domain>kvm</domain>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <arch>x86_64</arch>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <vcpu max='4096'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <iothreads supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <os supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <enum name='firmware'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>efi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <loader supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>rom</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pflash</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='readonly'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>yes</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>no</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='secure'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>yes</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>no</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </loader>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </os>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='host-passthrough' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='hostPassthroughMigratable'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>on</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>off</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='maximum' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='maximumMigratable'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>on</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>off</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='host-model' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <vendor>AMD</vendor>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='x2apic'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='hypervisor'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='stibp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='overflow-recov'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='succor'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='lbrv'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc-scale'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='flushbyasid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='pause-filter'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='pfthreshold'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='disable' name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='custom' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Dhyana-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Genoa'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='auto-ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='auto-ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-128'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-256'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-512'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v6'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v7'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='KnightsMill'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512er'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512pf'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='KnightsMill-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512er'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512pf'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G4-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tbm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G5-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tbm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SierraForest'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cmpccxadd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SierraForest-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cmpccxadd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='athlon'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='athlon-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='core2duo'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='core2duo-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='coreduo'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='coreduo-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='n270'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='n270-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='phenom'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='phenom-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <memoryBacking supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <enum name='sourceType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>file</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>anonymous</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>memfd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </memoryBacking>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <devices>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <disk supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='diskDevice'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>disk</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>cdrom</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>floppy</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>lun</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='bus'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>fdc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>scsi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>sata</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-non-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </disk>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <graphics supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vnc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>egl-headless</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dbus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </graphics>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <video supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='modelType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vga</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>cirrus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>none</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>bochs</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ramfb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </video>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <hostdev supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='mode'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>subsystem</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='startupPolicy'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>default</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>mandatory</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>requisite</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>optional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='subsysType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pci</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>scsi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='capsType'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='pciBackend'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </hostdev>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <rng supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-non-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>random</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>egd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>builtin</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </rng>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <filesystem supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='driverType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>path</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>handle</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtiofs</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </filesystem>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <tpm supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tpm-tis</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tpm-crb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>emulator</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>external</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendVersion'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>2.0</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </tpm>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <redirdev supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='bus'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </redirdev>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <channel supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pty</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>unix</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </channel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <crypto supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>qemu</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>builtin</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </crypto>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <interface supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>default</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>passt</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </interface>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <panic supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>isa</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>hyperv</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </panic>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <console supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>null</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pty</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dev</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>file</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pipe</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>stdio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>udp</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tcp</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>unix</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>qemu-vdagent</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dbus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </console>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </devices>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <gic supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <vmcoreinfo supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <genid supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <backingStoreInput supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <backup supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <async-teardown supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <ps2 supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <sev supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <sgx supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <hyperv supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='features'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>relaxed</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vapic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>spinlocks</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vpindex</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>runtime</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>synic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>stimer</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>reset</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vendor_id</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>frequencies</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>reenlightenment</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tlbflush</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ipi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>avic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>emsr_bitmap</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>xmm_input</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <defaults>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <spinlocks>4095</spinlocks>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <stimer_direct>on</stimer_direct>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </defaults>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </hyperv>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <launchSecurity supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='sectype'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tdx</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </launchSecurity>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: </domainCapabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.519 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  1 14:20:36 np0005541455 nova_compute[188605]: <domainCapabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <domain>kvm</domain>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <arch>x86_64</arch>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <vcpu max='240'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <iothreads supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <os supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <enum name='firmware'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <loader supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>rom</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pflash</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='readonly'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>yes</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>no</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='secure'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>no</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </loader>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </os>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='host-passthrough' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='hostPassthroughMigratable'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>on</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>off</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='maximum' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='maximumMigratable'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>on</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>off</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='host-model' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <vendor>AMD</vendor>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='x2apic'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='hypervisor'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='stibp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='overflow-recov'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='succor'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='lbrv'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='tsc-scale'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='flushbyasid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='pause-filter'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='pfthreshold'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <feature policy='disable' name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <mode name='custom' supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Broadwell-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Cooperlake-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Denverton-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Dhyana-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Genoa'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='auto-ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='auto-ibrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Milan-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amd-psfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='stibp-always-on'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-Rome-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='EPYC-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='GraniteRapids-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-128'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-256'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx10-512'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='prefetchiti'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Haswell-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v6'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Icelake-Server-v7'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='IvyBridge-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='KnightsMill'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512er'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512pf'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='KnightsMill-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512er'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512pf'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G4-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tbm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Opteron_G5-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fma4'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tbm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xop'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SapphireRapids-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='amx-tile'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-bf16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-fp16'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bitalg'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrc'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fzrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='la57'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='taa-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xfd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SierraForest'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cmpccxadd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='SierraForest-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ifma'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cmpccxadd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fbsdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='fsrs'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ibrs-all'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mcdt-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pbrsb-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='psdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='serialize'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vaes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Client-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='hle'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='rtm'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Skylake-Server-v5'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512bw'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512cd'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512dq'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512f'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='avx512vl'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='invpcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pcid'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='pku'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='mpx'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v2'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v3'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='core-capability'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='split-lock-detect'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='Snowridge-v4'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='cldemote'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='erms'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='gfni'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdir64b'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='movdiri'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='xsaves'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='athlon'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='athlon-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='core2duo'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='core2duo-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='coreduo'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='coreduo-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='n270'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='n270-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='ss'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='phenom'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <blockers model='phenom-v1'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnow'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <feature name='3dnowext'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </blockers>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </mode>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </cpu>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <memoryBacking supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <enum name='sourceType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>file</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>anonymous</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <value>memfd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </memoryBacking>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <devices>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <disk supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='diskDevice'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>disk</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>cdrom</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>floppy</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>lun</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='bus'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ide</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>fdc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>scsi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>sata</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-non-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </disk>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <graphics supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vnc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>egl-headless</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dbus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </graphics>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <video supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='modelType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vga</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>cirrus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>none</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>bochs</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ramfb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </video>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <hostdev supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='mode'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>subsystem</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='startupPolicy'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>default</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>mandatory</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>requisite</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>optional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='subsysType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pci</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>scsi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='capsType'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='pciBackend'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </hostdev>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <rng supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtio-non-transitional</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>random</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>egd</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>builtin</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </rng>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <filesystem supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='driverType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>path</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>handle</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>virtiofs</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </filesystem>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <tpm supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tpm-tis</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tpm-crb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>emulator</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>external</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendVersion'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>2.0</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </tpm>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <redirdev supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='bus'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>usb</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </redirdev>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <channel supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pty</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>unix</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </channel>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <crypto supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>qemu</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendModel'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>builtin</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </crypto>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <interface supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='backendType'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>default</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>passt</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </interface>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <panic supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='model'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>isa</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>hyperv</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </panic>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <console supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='type'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>null</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vc</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pty</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dev</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>file</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>pipe</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>stdio</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>udp</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tcp</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>unix</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>qemu-vdagent</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>dbus</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </console>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </devices>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  <features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <gic supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <vmcoreinfo supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <genid supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <backingStoreInput supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <backup supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <async-teardown supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <ps2 supported='yes'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <sev supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <sgx supported='no'/>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <hyperv supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='features'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>relaxed</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vapic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>spinlocks</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vpindex</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>runtime</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>synic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>stimer</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>reset</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>vendor_id</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>frequencies</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>reenlightenment</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tlbflush</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>ipi</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>avic</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>emsr_bitmap</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>xmm_input</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <defaults>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <spinlocks>4095</spinlocks>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <stimer_direct>on</stimer_direct>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </defaults>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </hyperv>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    <launchSecurity supported='yes'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      <enum name='sectype'>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:        <value>tdx</value>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:      </enum>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:    </launchSecurity>
Dec  1 14:20:36 np0005541455 nova_compute[188605]:  </features>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: </domainCapabilities>
Dec  1 14:20:36 np0005541455 nova_compute[188605]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.576 188609 DEBUG nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.577 188609 INFO nova.virt.libvirt.host [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Secure Boot support detected
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.579 188609 INFO nova.virt.libvirt.driver [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.579 188609 INFO nova.virt.libvirt.driver [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.593 188609 DEBUG nova.virt.libvirt.driver [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.633 188609 INFO nova.virt.node [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Determined node identity 0211b5d4-bab8-409f-8f53-df766ffbcb27 from /var/lib/nova/compute_id
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.658 188609 WARNING nova.compute.manager [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Compute nodes ['0211b5d4-bab8-409f-8f53-df766ffbcb27'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.708 188609 INFO nova.compute.manager [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.771 188609 WARNING nova.compute.manager [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.771 188609 DEBUG oslo_concurrency.lockutils [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.772 188609 DEBUG oslo_concurrency.lockutils [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.772 188609 DEBUG oslo_concurrency.lockutils [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:20:36 np0005541455 nova_compute[188605]: 2025-12-01 19:20:36.772 188609 DEBUG nova.compute.resource_tracker [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 14:20:36 np0005541455 systemd[1]: Starting libvirt nodedev daemon...
Dec  1 14:20:36 np0005541455 systemd[1]: Started libvirt nodedev daemon.
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.092 188609 WARNING nova.virt.libvirt.driver [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.094 188609 DEBUG nova.compute.resource_tracker [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6051MB free_disk=72.60916519165039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.094 188609 DEBUG oslo_concurrency.lockutils [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.095 188609 DEBUG oslo_concurrency.lockutils [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.114 188609 WARNING nova.compute.resource_tracker [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] No compute node record for compute-0.ctlplane.example.com:0211b5d4-bab8-409f-8f53-df766ffbcb27: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 0211b5d4-bab8-409f-8f53-df766ffbcb27 could not be found.
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.136 188609 INFO nova.compute.resource_tracker [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 0211b5d4-bab8-409f-8f53-df766ffbcb27
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.215 188609 DEBUG nova.compute.resource_tracker [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.215 188609 DEBUG nova.compute.resource_tracker [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 14:20:37 np0005541455 python3.9[189504]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:20:37 np0005541455 systemd[1]: Stopping nova_compute container...
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.369 188609 DEBUG oslo_concurrency.lockutils [None req-7f8cefb8-f392-4857-a15c-0577ffacd27c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.369 188609 DEBUG oslo_concurrency.lockutils [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.369 188609 DEBUG oslo_concurrency.lockutils [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 14:20:37 np0005541455 nova_compute[188605]: 2025-12-01 19:20:37.369 188609 DEBUG oslo_concurrency.lockutils [None req-b71ca0b1-6b06-4f9a-80f6-79fc48762184 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 14:20:37 np0005541455 virtqemud[189187]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  1 14:20:37 np0005541455 virtqemud[189187]: hostname: compute-0
Dec  1 14:20:37 np0005541455 virtqemud[189187]: End of file while reading data: Input/output error
Dec  1 14:20:37 np0005541455 systemd[1]: libpod-a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967.scope: Deactivated successfully.
Dec  1 14:20:37 np0005541455 systemd[1]: libpod-a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967.scope: Consumed 3.103s CPU time.
Dec  1 14:20:37 np0005541455 conmon[188605]: conmon a49600dc8e564699c890 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967.scope/container/memory.events
Dec  1 14:20:37 np0005541455 podman[189508]: 2025-12-01 19:20:37.786644189 +0000 UTC m=+0.478149500 container died a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 14:20:37 np0005541455 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967-userdata-shm.mount: Deactivated successfully.
Dec  1 14:20:37 np0005541455 systemd[1]: var-lib-containers-storage-overlay-15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915-merged.mount: Deactivated successfully.
Dec  1 14:20:37 np0005541455 podman[189508]: 2025-12-01 19:20:37.852071727 +0000 UTC m=+0.543577028 container cleanup a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 14:20:37 np0005541455 podman[189508]: nova_compute
Dec  1 14:20:37 np0005541455 podman[189536]: nova_compute
Dec  1 14:20:37 np0005541455 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  1 14:20:37 np0005541455 systemd[1]: Stopped nova_compute container.
Dec  1 14:20:37 np0005541455 systemd[1]: Starting nova_compute container...
Dec  1 14:20:38 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:20:38 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:38 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:38 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:38 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:38 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15c8a5dfb1d7f7861327b27ce7d523a282a53d89ff97c0ca1b5edbc129905915/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:38 np0005541455 podman[189549]: 2025-12-01 19:20:38.069418847 +0000 UTC m=+0.115514916 container init a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 14:20:38 np0005541455 podman[189549]: 2025-12-01 19:20:38.077355825 +0000 UTC m=+0.123451864 container start a49600dc8e564699c8907e2ca54945c314b5b17705fe80360125fde83c0dc967 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:20:38 np0005541455 podman[189549]: nova_compute
Dec  1 14:20:38 np0005541455 nova_compute[189564]: + sudo -E kolla_set_configs
Dec  1 14:20:38 np0005541455 systemd[1]: Started nova_compute container.
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Validating config file
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Copying service configuration files
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Deleting /etc/ceph
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Creating directory /etc/ceph
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /etc/ceph
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Writing out command to execute
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  1 14:20:38 np0005541455 nova_compute[189564]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  1 14:20:38 np0005541455 nova_compute[189564]: ++ cat /run_command
Dec  1 14:20:38 np0005541455 nova_compute[189564]: + CMD=nova-compute
Dec  1 14:20:38 np0005541455 nova_compute[189564]: + ARGS=
Dec  1 14:20:38 np0005541455 nova_compute[189564]: + sudo kolla_copy_cacerts
Dec  1 14:20:38 np0005541455 nova_compute[189564]: + [[ ! -n '' ]]
Dec  1 14:20:38 np0005541455 nova_compute[189564]: + . kolla_extend_start
Dec  1 14:20:38 np0005541455 nova_compute[189564]: Running command: 'nova-compute'
Dec  1 14:20:38 np0005541455 nova_compute[189564]: + echo 'Running command: '\''nova-compute'\'''
Dec  1 14:20:38 np0005541455 nova_compute[189564]: + umask 0022
Dec  1 14:20:38 np0005541455 nova_compute[189564]: + exec nova-compute
Dec  1 14:20:38 np0005541455 python3.9[189727]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  1 14:20:39 np0005541455 systemd[1]: Started libpod-conmon-b7e71d1fa76afceccc1bd66ef02742c75fce787a57939c3d37257723a6c83554.scope.
Dec  1 14:20:39 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:20:39 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca82a1f63d50db2dbb9ed2b409e5a3a816f0ab9b22b218e617210203f930808/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:39 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca82a1f63d50db2dbb9ed2b409e5a3a816f0ab9b22b218e617210203f930808/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:39 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca82a1f63d50db2dbb9ed2b409e5a3a816f0ab9b22b218e617210203f930808/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  1 14:20:39 np0005541455 podman[189753]: 2025-12-01 19:20:39.142002045 +0000 UTC m=+0.131767064 container init b7e71d1fa76afceccc1bd66ef02742c75fce787a57939c3d37257723a6c83554 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 14:20:39 np0005541455 podman[189753]: 2025-12-01 19:20:39.153913398 +0000 UTC m=+0.143678367 container start b7e71d1fa76afceccc1bd66ef02742c75fce787a57939c3d37257723a6c83554 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, org.label-schema.license=GPLv2)
Dec  1 14:20:39 np0005541455 python3.9[189727]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Applying nova statedir ownership
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  1 14:20:39 np0005541455 nova_compute_init[189774]: INFO:nova_statedir:Nova statedir ownership complete
Dec  1 14:20:39 np0005541455 systemd[1]: libpod-b7e71d1fa76afceccc1bd66ef02742c75fce787a57939c3d37257723a6c83554.scope: Deactivated successfully.
Dec  1 14:20:39 np0005541455 podman[189775]: 2025-12-01 19:20:39.238590116 +0000 UTC m=+0.044383509 container died b7e71d1fa76afceccc1bd66ef02742c75fce787a57939c3d37257723a6c83554 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 14:20:39 np0005541455 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7e71d1fa76afceccc1bd66ef02742c75fce787a57939c3d37257723a6c83554-userdata-shm.mount: Deactivated successfully.
Dec  1 14:20:39 np0005541455 systemd[1]: var-lib-containers-storage-overlay-cca82a1f63d50db2dbb9ed2b409e5a3a816f0ab9b22b218e617210203f930808-merged.mount: Deactivated successfully.
Dec  1 14:20:39 np0005541455 podman[189786]: 2025-12-01 19:20:39.385053449 +0000 UTC m=+0.153411801 container cleanup b7e71d1fa76afceccc1bd66ef02742c75fce787a57939c3d37257723a6c83554 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:20:39 np0005541455 systemd[1]: libpod-conmon-b7e71d1fa76afceccc1bd66ef02742c75fce787a57939c3d37257723a6c83554.scope: Deactivated successfully.
Dec  1 14:20:39 np0005541455 systemd-logind[797]: Session 24 logged out. Waiting for processes to exit.
Dec  1 14:20:39 np0005541455 systemd[1]: session-24.scope: Deactivated successfully.
Dec  1 14:20:39 np0005541455 systemd[1]: session-24.scope: Consumed 2min 2.955s CPU time.
Dec  1 14:20:39 np0005541455 systemd-logind[797]: Removed session 24.
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.085 189568 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.085 189568 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.085 189568 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.085 189568 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.212 189568 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.238 189568 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.238 189568 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.764 189568 INFO nova.virt.driver [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.882 189568 INFO nova.compute.provider_config [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.899 189568 DEBUG oslo_concurrency.lockutils [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.899 189568 DEBUG oslo_concurrency.lockutils [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.899 189568 DEBUG oslo_concurrency.lockutils [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.900 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.900 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.900 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.900 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.900 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.901 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.901 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.901 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.901 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.901 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.902 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.902 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.902 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.902 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.902 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.903 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.903 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.903 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.903 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.903 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.904 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.904 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.904 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.904 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.904 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.904 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.905 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.905 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.905 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.906 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.906 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.906 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.906 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.906 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.907 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.907 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.907 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.907 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.907 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.908 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.908 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.908 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.908 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.908 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.909 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.909 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.909 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.909 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.909 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.910 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.910 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.910 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.910 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.910 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.911 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.911 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.911 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.911 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.911 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.912 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.912 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.912 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.912 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.912 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.913 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.913 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.913 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.913 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.913 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.914 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.914 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.914 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.914 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.914 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.915 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.915 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.915 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.915 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.915 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.916 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.916 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.916 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.916 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.916 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.917 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.917 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.917 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.917 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.917 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.918 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.918 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.918 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.918 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.918 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.919 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.919 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.919 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.919 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.919 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.920 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.920 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.920 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.920 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.920 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.921 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.921 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.921 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.921 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.921 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.922 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.922 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.922 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.922 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.922 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.922 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.923 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.923 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.923 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.923 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.924 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.924 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.924 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.924 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.924 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.924 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.925 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.925 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.925 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.925 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.925 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.926 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.926 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.926 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.926 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.926 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.926 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.926 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.927 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.927 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.927 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.927 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.927 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.927 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.927 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.928 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.928 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.928 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.928 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.928 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.928 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.929 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.929 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.929 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.929 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.929 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.929 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.929 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.930 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.930 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.930 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.930 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.930 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.930 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.931 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.931 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.931 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.931 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.931 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.931 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.931 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.932 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.932 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.932 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.932 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.932 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.932 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.933 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.933 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.933 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.933 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.933 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.934 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.934 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.934 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.934 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.934 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.935 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.935 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.935 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.935 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.935 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.936 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.936 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.936 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.936 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.936 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.937 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.937 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.937 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.937 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.938 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.938 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.938 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.938 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.938 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.939 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.939 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.939 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.939 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.939 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
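The cache.* values above show a mostly-default oslo.cache setup: the in-process dict backend, caching enabled, and a 600-second TTL. A minimal sketch of how such a configuration is typically consumed, following the documented oslo.cache pattern (not code from this node):

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    cache.configure(CONF)  # registers the cache.* options dumped above
    CONF([])               # parse nothing; defaults apply

    # Mirror the logged values.
    CONF.set_override('backend', 'oslo_cache.dict', group='cache')
    CONF.set_override('enabled', True, group='cache')
    CONF.set_override('expiration_time', 600, group='cache')

    # Build a dogpile.cache region honouring backend and expiration_time.
    region = cache.create_region()
    cache.configure_cache_region(CONF, region)

    region.set('answer', 42)
    assert region.get('answer') == 42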
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.940 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.940 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.940 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.940 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.940 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.940 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.941 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.941 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.941 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.941 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.941 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.941 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.941 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.942 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.942 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.942 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.942 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.942 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.942 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.943 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.943 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.943 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.943 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.943 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.943 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.943 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.944 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.944 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.944 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.944 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.944 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.945 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.945 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.945 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.945 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.945 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.945 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.946 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.946 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.946 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.946 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.946 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.946 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.946 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.946 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.947 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.947 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.947 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.947 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.947 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.947 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.947 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.948 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.948 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.948 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.948 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.948 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.948 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.948 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.949 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.949 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.949 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.949 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.949 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.949 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.949 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.950 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.950 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.950 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.950 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.950 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.950 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.950 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.951 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.951 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.951 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.951 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.951 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.951 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.951 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.952 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.952 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.952 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.952 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.952 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.952 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.952 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.953 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.953 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.953 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.953 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
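Two things are worth noting in the database.* and api_database.* dumps above. First, connection and slave_connection are rendered as **** because oslo.config masks any option registered with secret=True when logging values; the real connection strings are not recoverable from this log. Second, the pool settings translate into standard SQLAlchemy engine arguments. A rough sketch of that mapping, with a placeholder URL since the real one is masked:

    from sqlalchemy import create_engine

    engine = create_engine(
        'mysql+pymysql://nova:PASSWORD@db.example.org/nova',  # placeholder
        pool_size=5,        # database.max_pool_size
        max_overflow=50,    # database.max_overflow
        pool_recycle=3600,  # database.connection_recycle_time
    )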
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.953 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.953 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.953 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.954 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.954 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.954 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.954 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.954 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.954 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.955 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.955 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.955 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.955 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.955 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.955 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.955 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.956 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.956 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.956 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.956 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.956 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.956 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.956 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.957 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.957 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.957 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.957 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.957 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.957 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.957 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.958 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.958 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.958 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.958 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
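The glance.* endpoint options above (service_type, valid_interfaces, region_name) are the standard keystoneauth1 service-discovery knobs, which nova typically consumes through a keystoneauth1 Adapter. A hedged sketch of that correspondence, with the auth plugin omitted for brevity:

    from keystoneauth1 import adapter, session

    sess = session.Session()  # real code attaches an auth plugin here
    image_api = adapter.Adapter(
        session=sess,
        service_type='image',     # glance.service_type
        interface='internal',     # glance.valid_interfaces
        region_name='regionOne',  # glance.region_name
    )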
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.958 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.958 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.958 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.959 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.959 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.959 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.959 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.959 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.959 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.959 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.960 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.960 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.960 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.960 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.960 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.960 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.960 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.961 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.961 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.961 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.961 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.961 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.961 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.962 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.962 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.962 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.962 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.962 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.962 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.962 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.963 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.963 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.963 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.963 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.963 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.963 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.963 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.963 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.964 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.964 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.964 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.964 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.964 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.964 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.964 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.965 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.965 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.965 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.965 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.965 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.965 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.965 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.966 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.966 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.966 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
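The masked value above is deliberate: options declared with secret=True are redacted when the configuration is dumped, which is why key_manager.fixed_key here (and neutron.metadata_proxy_shared_secret further down) appear as "****". A short illustrative sketch:

    # Sketch: secret options are masked in config dumps. The option name
    # matches the record above; the value is made up for illustration.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.StrOpt('fixed_key', secret=True)],
                       group='key_manager')
    conf([])
    conf.set_override('fixed_key', 'not-a-real-key', group='key_manager')

    conf.log_opt_values(LOG, logging.DEBUG)  # logs: key_manager.fixed_key = ****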
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.966 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.966 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.966 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.966 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.967 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.967 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.967 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.967 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.967 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.967 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.967 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.968 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.968 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.968 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.968 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.968 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.968 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.968 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.969 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.969 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.969 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.969 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.969 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.969 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.970 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.970 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.970 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.970 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.970 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.970 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.971 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.971 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.971 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.971 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.971 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.971 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.972 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.972 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.972 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.972 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.972 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.972 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.972 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.973 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.973 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.973 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.973 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.973 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.973 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.973 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.974 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.974 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.974 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.974 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.974 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.974 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.974 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.975 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.975 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.975 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.975 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
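The ironic.* and keystone.* blocks (like the neutron.* block further down) share the same shape (cafile, certfile, connect_retries, service_type, valid_interfaces, and so on) because they are standard keystoneauth1 session and adapter option sets registered once per consumed service. A rough, hypothetical sketch of how such a group is wired up (simplified, not nova's exact code path):

    # Hypothetical sketch: a per-service group such as [keystone] is a
    # stock keystoneauth1 option set; service_type and valid_interfaces
    # drive endpoint selection from the service catalog.
    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    ks_loading.register_session_conf_options(conf, 'keystone')  # cafile, certfile, timeout, ...
    ks_loading.register_adapter_conf_options(conf, 'keystone')  # service_type, valid_interfaces, ...
    conf([])

    sess = ks_loading.load_session_from_conf_options(conf, 'keystone')
    adapter = ks_loading.load_adapter_from_conf_options(conf, 'keystone',
                                                        session=sess)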
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.975 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.975 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.975 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.976 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.976 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.976 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.976 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.976 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.976 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.976 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.977 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.977 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.977 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.977 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.977 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.977 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.977 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.978 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.978 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.978 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.978 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.978 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.978 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.978 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.979 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.979 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.979 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.979 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.979 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.979 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.979 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.980 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.980 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.980 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.980 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.980 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.980 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.980 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.981 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.981 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.981 189568 WARNING oslo_config.cfg [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  1 14:20:40 np0005541455 nova_compute[189564]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  1 14:20:40 np0005541455 nova_compute[189564]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  1 14:20:40 np0005541455 nova_compute[189564]: and ``live_migration_inbound_addr`` respectively.
Dec  1 14:20:40 np0005541455 nova_compute[189564]: ).  Its value may be silently ignored in the future.#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.981 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.981 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.982 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.982 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.982 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.982 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.982 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.982 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.982 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.983 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.983 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.983 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.983 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.983 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.983 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.983 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.984 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.984 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.984 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.984 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.984 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.984 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.984 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.985 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.985 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.985 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.985 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.985 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.985 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.985 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.986 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.986 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.986 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.986 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.986 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.987 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.987 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.987 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.987 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.987 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.987 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.987 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.988 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.988 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.988 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.988 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.988 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.988 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.988 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.989 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.989 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.989 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.989 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.989 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.989 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.989 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.990 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.990 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.990 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.990 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.990 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.990 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.990 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.991 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.991 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.991 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.991 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.991 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.991 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.991 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.992 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.992 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.992 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.992 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.992 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.992 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.992 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.993 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.993 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.993 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
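The [neutron] block above is a standard oslo.config client section, and the trailing "log_opt_values .../cfg.py:2609" on every line shows the dump comes from ConfigOpts.log_opt_values(). A minimal sketch in Python of how such a section is declared and dumped follows; the option subset is illustrative, and declaring an option with secret=True is what makes log_opt_values() print **** for values such as neutron.metadata_proxy_shared_secret.

import logging

from oslo_config import cfg

CONF = cfg.CONF

# Illustrative subset of the [neutron] options logged above.
neutron_opts = [
    cfg.StrOpt('ovs_bridge', default='br-int'),
    cfg.BoolOpt('service_metadata_proxy', default=False),
    # secret=True is why the dump shows '****' instead of the real value.
    cfg.StrOpt('metadata_proxy_shared_secret', secret=True),
]
CONF.register_opts(neutron_opts, group='neutron')

CONF([])  # production code would pass ['--config-file', '/etc/nova/nova.conf']
LOG = logging.getLogger(__name__)
CONF.log_opt_values(LOG, logging.DEBUG)  # emits the 'section.option = value' lines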
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.993 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.993 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.993 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.993 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.994 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
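notifications.notification_format = unversioned selects the legacy notification family here, while versioned_notifications_topics names the topic versioned payloads would use. A minimal sketch of emitting a notification with oslo.messaging under these settings, assuming transport_url is configured elsewhere in the same file:

import oslo_messaging
from oslo_config import cfg

CONF = cfg.CONF
CONF([])

transport = oslo_messaging.get_notification_transport(CONF)
notifier = oslo_messaging.Notifier(
    transport,
    publisher_id='nova-compute:np0005541455',  # hypothetical publisher id
    driver='messagingv2',
    topics=['versioned_notifications'],        # matches the option above
)
# Event type and payload are illustrative only.
notifier.info({}, 'instance.create.end', {'uuid': '...'})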
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.994 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.994 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.994 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.994 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.994 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.995 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.995 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.995 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.995 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.995 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.995 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.995 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.996 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.996 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.996 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.996 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.996 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.996 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.996 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.996 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.997 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.997 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.997 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.997 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.997 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.997 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.997 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.998 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.998 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.998 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.998 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.998 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.998 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.998 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.999 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.999 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.999 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.999 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.999 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:40 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.999 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
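The [placement] section above is a keystoneauth1 auth block: auth_type = password plus username/project/domain credentials, with the password masked. A sketch of how such a section is typically turned into an authenticated session, assuming the options were registered through keystoneauth1.loading:

from keystoneauth1 import loading
from oslo_config import cfg

CONF = cfg.CONF

# Assumes the [placement] options were registered with
# loading.register_auth_conf_options(CONF, 'placement') and
# loading.register_session_conf_options(CONF, 'placement').
auth = loading.load_auth_from_conf_options(CONF, 'placement')
sess = loading.load_session_from_conf_options(CONF, 'placement', auth=auth)

# valid_interfaces = ['internal'] and region_name = regionOne steer
# endpoint selection in the service catalog.
resp = sess.get('/resource_providers',
                endpoint_filter={'service_type': 'placement',
                                 'interface': 'internal',
                                 'region_name': 'regionOne'})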
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:40.999 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.000 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.000 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.000 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.000 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.000 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.000 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.000 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.000 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.001 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.001 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.001 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.001 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
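The [quota] values are per-project ceilings (20 cores, 51200 MiB of RAM, 10 instances, and so on), and quota.recheck_quota = True runs the check a second time after resources are created to catch races between concurrent requests. The check itself is a simple bound; a hypothetical helper:

# Hypothetical helper mirroring the [quota] bound check; the limits
# correspond to quota.cores = 20 and quota.ram = 51200 above.
def fits_quota(used, requested, limit):
    """True if a request stays within one quota dimension."""
    return used + requested <= limit

# 18 cores already used, a 4-vCPU boot request, quota.cores = 20: rejected.
assert fits_quota(used=18, requested=4, limit=20) is False
assert fits_quota(used=30720, requested=8192, limit=51200) is True  # RAM in MiB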
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.001 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.002 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.002 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.002 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.002 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.002 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.002 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.002 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.003 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.003 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.003 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.003 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.003 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.003 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.003 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.004 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.004 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.004 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.004 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.004 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.004 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.004 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.004 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.005 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.005 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.005 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.005 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.005 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.005 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.005 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.006 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.006 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.006 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.006 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.006 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.006 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.006 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
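The [scheduler] and [filter_scheduler] options drive a filter-then-weigh pipeline: enabled_filters prune candidate hosts, then weighers score the survivors, each scaled by its *_weight_multiplier (a negative multiplier such as io_ops_weight_multiplier = -1.0 penalizes busier hosts). A hypothetical sketch of that pattern, not Nova's actual scheduler classes:

import random

# Illustrative filter-then-weigh pipeline; hypothetical, not Nova's code.
def schedule(hosts, request, filters, weighers, host_subset_size=1):
    # Filters (ComputeFilter, ImagePropertiesFilter, ...) prune hosts that
    # cannot satisfy the request at all.
    for passes in filters:
        hosts = [h for h in hosts if passes(h, request)]
    # Each weigher's score is scaled by its multiplier before summing.
    scored = [(sum(mult * weigh(h, request) for mult, weigh in weighers), h)
              for h in hosts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # host_subset_size = 1 always takes the single best host; larger values
    # pick randomly among the top N to reduce races between schedulers.
    top = scored[:max(host_subset_size, 1)]
    return random.choice(top)[1] if top else None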
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.006 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.007 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.007 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.007 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.007 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.007 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.007 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.008 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.008 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.008 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.008 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.008 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.008 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.008 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.008 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.009 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.009 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.009 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.009 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.009 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
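service_user.send_service_user_token = True makes the compute service attach its own service token next to the incoming user token, so downstream services can keep trusting long-running operations after the user token expires. keystoneauth1 ships a wrapper for exactly this; a sketch, assuming the [service_user] auth options were registered through keystoneauth1.loading:

from keystoneauth1 import loading, service_token
from oslo_config import cfg

CONF = cfg.CONF

# Assumes loading.register_auth_conf_options(CONF, 'service_user') ran
# earlier and send_service_user_token was registered as a BoolOpt.
def maybe_wrap(user_auth):
    if not CONF.service_user.send_service_user_token:
        return user_auth
    service_auth = loading.load_auth_from_conf_options(CONF, 'service_user')
    return service_token.ServiceTokenAuthWrapper(user_auth=user_auth,
                                                 service_auth=service_auth)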
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.009 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.009 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.010 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.010 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.010 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.010 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.010 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.010 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.010 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.011 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.011 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.011 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.011 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.011 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.011 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.011 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.011 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.012 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.012 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.012 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.012 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.012 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.012 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.012 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.012 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.013 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.013 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.013 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.013 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.013 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.013 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.013 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.014 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.014 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.014 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.014 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.014 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.014 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.014 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.014 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.015 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.015 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.015 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.015 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.015 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.015 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.015 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.015 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.016 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.016 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.016 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.016 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.016 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.016 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.017 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.017 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.017 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.017 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.017 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.017 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.018 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
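With vnc.enabled = True, consoles are reachable through the noVNC proxy at vnc.novncproxy_base_url, while vnc.server_proxyclient_address (192.168.122.100) is the address the proxy uses to reach this compute host. A hypothetical sketch of assembling the URL a client would receive; the exact query-string layout varies across noVNC/proxy versions:

from urllib.parse import urlencode

# Hypothetical sketch: append a one-time console token to the base URL.
def console_url(base_url, token):
    sep = '&' if '?' in base_url else '?'
    return base_url + sep + urlencode({'token': token})

print(console_url(
    'https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html',
    'f8b3...'))  # token value is illustrative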
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.018 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.018 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.018 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.018 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.018 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.018 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.019 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.019 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.019 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.019 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.019 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.019 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.019 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.019 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.020 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.020 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.020 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.020 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.020 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.020 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.020 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.021 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.021 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.021 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.021 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.021 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.021 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.021 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.021 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.022 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.022 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.022 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.022 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.022 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.022 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.022 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.023 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.023 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.023 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.023 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.023 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.023 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.023 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.024 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.024 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.024 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
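Note: the [oslo_policy] values above (policy_file = policy.yaml, policy_dirs = ['policy.d'], scope and new defaults enforced) are consumed by oslo.policy. A minimal sketch of building the enforcer from this config, assuming the same config-file path as before; the rule name in the trailing comment is hypothetical:

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF(['--config-file', '/etc/nova/nova.conf'])

    # The Enforcer picks up [oslo_policy] policy_file and policy_dirs,
    # i.e. policy.yaml plus any overrides dropped into policy.d/.
    enforcer = policy.Enforcer(CONF)
    enforcer.load_rules()

    # Illustrative only; real rule names come from nova's policy registry:
    # enforcer.enforce('os_compute_api:servers:show', target, credentials)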
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.024 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.024 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.024 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.024 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.025 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.025 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.025 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.025 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.025 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.025 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.025 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.026 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.026 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.026 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.026 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.026 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.026 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.026 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.027 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.027 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.027 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.027 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.027 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.027 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.027 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.027 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.028 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.028 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.028 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.028 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.028 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.028 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.028 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.029 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.029 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
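Note: the [oslo_messaging_rabbit] group above is what oslo.messaging's rabbit driver consumes when nova-compute builds its RPC transport. A minimal sketch under the assumption that /etc/nova/nova.conf carries the same values (amqp_durable_queues = True, rabbit_quorum_queue = True, ...); the broker URL is masked as **** in the log and is read implicitly from the transport_url option:

    from oslo_config import cfg
    import oslo_messaging as messaging

    CONF = cfg.CONF
    CONF(['--config-file', '/etc/nova/nova.conf'])

    # Builds the RPC transport from [DEFAULT] transport_url plus the
    # [oslo_messaging_rabbit] tuning options dumped above.
    transport = messaging.get_rpc_transport(CONF)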
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.029 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.029 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.029 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.029 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.029 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.030 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.030 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.030 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.030 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.030 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.030 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.030 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.031 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.031 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.031 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.031 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.031 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.031 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.031 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.031 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.032 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.032 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.032 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.032 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.032 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.032 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.032 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.033 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.033 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.033 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.033 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.033 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.033 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.033 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.034 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.034 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.034 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.034 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.034 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.034 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.034 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.034 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
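Note: the [oslo_limit] group above holds keystoneauth credentials for the unified-limits endpoint. A sketch of the password auth those options describe, assuming keystoneauth1 is driven directly rather than through oslo.limit's own plumbing; the real password is masked (****) in the log, so a placeholder stands in for it:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # Values copied from the [oslo_limit] dump above.
    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',
        password='REDACTED',          # masked in the log
        user_domain_name='Default',
        system_scope='all',           # oslo_limit.system_scope = all
    )
    sess = session.Session(auth=auth)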
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.035 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.035 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.035 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.035 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.036 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.036 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.036 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.036 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.036 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.037 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.037 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.037 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.037 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.037 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.037 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.038 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.038 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.038 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.038 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.038 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.038 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.038 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.039 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.039 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.039 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.039 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.039 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.039 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.040 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
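Note: the vif_plug_*_privileged, os_vif_linux_bridge and os_vif_ovs groups above belong to the os-vif library and its plugins rather than to nova proper. A minimal sketch of the one public entry point that loads them:

    import os_vif

    # Loads the VIF plugins (linux_bridge, ovs, ...) whose privsep and
    # [os_vif_*] options are dumped above; the OVS plugin then talks to
    # ovsdb at os_vif_ovs.ovsdb_connection (tcp:127.0.0.1:6640 here).
    os_vif.initialize()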
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.040 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.040 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.040 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.040 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.040 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.040 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.040 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.041 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.041 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.041 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.041 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.041 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.041 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.042 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.042 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.042 189568 DEBUG oslo_service.service [None req-21d47958-9538-4c89-99e0-3c24c7751a04 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
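Note: the row of asterisks closes the option dump. The entire block above is the output of a single oslo.config call that oslo.service makes at service start when debug logging is enabled; a minimal sketch reproducing it, assuming an oslo.log-style logger:

    import logging

    from oslo_config import cfg
    from oslo_log import log

    LOG = log.getLogger(__name__)
    CONF = cfg.CONF

    # Emits one DEBUG line per registered option, then the closing row
    # of asterisks (cfg.py:2609 and cfg.py:2613 in the log lines above).
    CONF.log_opt_values(LOG, logging.DEBUG)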
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.043 189568 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.058 189568 INFO nova.virt.node [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Determined node identity 0211b5d4-bab8-409f-8f53-df766ffbcb27 from /var/lib/nova/compute_id#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.058 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.059 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.059 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.059 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.073 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb0da033af0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.075 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb0da033af0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.076 189568 INFO nova.virt.libvirt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Connection event '1' reason 'None'#033[00m
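Note: the driver has now connected to qemu:///system and next logs the host capabilities XML. A minimal sketch of fetching the same document with the libvirt-python bindings:

    import libvirt

    # Same URI the driver just logged; getCapabilities() returns the
    # <capabilities> XML reproduced below.
    conn = libvirt.open('qemu:///system')
    print(conn.getCapabilities())
    conn.close()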
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.083 189568 INFO nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Libvirt host capabilities <capabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <host>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <uuid>321a04b4-6595-4e40-a9f1-f8a11b88d7a9</uuid>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <cpu>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <arch>x86_64</arch>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model>EPYC-Rome-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <vendor>AMD</vendor>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <microcode version='16777317'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <signature family='23' model='49' stepping='0'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='x2apic'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='tsc-deadline'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='osxsave'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='hypervisor'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='tsc_adjust'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='spec-ctrl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='stibp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='arch-capabilities'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='cmp_legacy'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='topoext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='virt-ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='lbrv'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='tsc-scale'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='vmcb-clean'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='pause-filter'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='pfthreshold'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='svme-addr-chk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='rdctl-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='skip-l1dfl-vmentry'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='mds-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature name='pschange-mc-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <pages unit='KiB' size='4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <pages unit='KiB' size='2048'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <pages unit='KiB' size='1048576'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </cpu>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <power_management>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <suspend_mem/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <suspend_disk/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <suspend_hybrid/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </power_management>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <iommu support='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <migration_features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <live/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <uri_transports>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <uri_transport>tcp</uri_transport>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <uri_transport>rdma</uri_transport>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </uri_transports>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </migration_features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <topology>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <cells num='1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <cell id='0'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:          <memory unit='KiB'>7864324</memory>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:          <pages unit='KiB' size='4'>1966081</pages>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:          <pages unit='KiB' size='2048'>0</pages>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:          <distances>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:            <sibling id='0' value='10'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:          </distances>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:          <cpus num='8'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:          </cpus>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        </cell>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </cells>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </topology>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <cache>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </cache>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <secmodel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model>selinux</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <doi>0</doi>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </secmodel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <secmodel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model>dac</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <doi>0</doi>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </secmodel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </host>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <guest>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <os_type>hvm</os_type>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <arch name='i686'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <wordsize>32</wordsize>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <domain type='qemu'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <domain type='kvm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </arch>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <pae/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <nonpae/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <acpi default='on' toggle='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <apic default='on' toggle='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <cpuselection/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <deviceboot/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <disksnapshot default='on' toggle='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <externalSnapshot/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </guest>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <guest>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <os_type>hvm</os_type>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <arch name='x86_64'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <wordsize>64</wordsize>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <domain type='qemu'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <domain type='kvm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </arch>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <acpi default='on' toggle='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <apic default='on' toggle='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <cpuselection/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <deviceboot/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <disksnapshot default='on' toggle='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <externalSnapshot/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </guest>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 
Dec  1 14:20:41 np0005541455 nova_compute[189564]: </capabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.093 189568 DEBUG nova.virt.libvirt.volume.mount [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.099 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.104 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  1 14:20:41 np0005541455 nova_compute[189564]: <domainCapabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <domain>kvm</domain>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <arch>i686</arch>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <vcpu max='4096'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <iothreads supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <os supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <enum name='firmware'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <loader supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>rom</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pflash</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='readonly'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>yes</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>no</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='secure'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>no</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </loader>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </os>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <cpu>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='host-passthrough' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='hostPassthroughMigratable'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>on</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>off</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='maximum' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='maximumMigratable'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>on</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>off</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='host-model' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <vendor>AMD</vendor>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='x2apic'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='hypervisor'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='stibp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='overflow-recov'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='succor'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='lbrv'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc-scale'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='flushbyasid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='pause-filter'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='pfthreshold'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='disable' name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='custom' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Dhyana-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Genoa'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='auto-ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='auto-ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-128'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-256'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-512'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v6'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v7'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='KnightsMill'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512er'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512pf'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='KnightsMill-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512er'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512pf'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G4-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tbm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G5-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tbm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SierraForest'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cmpccxadd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SierraForest-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cmpccxadd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='athlon'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='athlon-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='core2duo'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='core2duo-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='coreduo'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='coreduo-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='n270'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='n270-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='phenom'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='phenom-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </cpu>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <memoryBacking supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <enum name='sourceType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>file</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>anonymous</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>memfd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </memoryBacking>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <devices>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <disk supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='diskDevice'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>disk</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>cdrom</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>floppy</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>lun</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='bus'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>fdc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>scsi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>sata</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-non-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </disk>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <graphics supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vnc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>egl-headless</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dbus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </graphics>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <video supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='modelType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vga</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>cirrus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>none</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>bochs</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ramfb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </video>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <hostdev supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='mode'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>subsystem</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='startupPolicy'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>default</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>mandatory</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>requisite</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>optional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='subsysType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pci</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>scsi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='capsType'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='pciBackend'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </hostdev>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <rng supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-non-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>random</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>egd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>builtin</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </rng>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <filesystem supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='driverType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>path</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>handle</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtiofs</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </filesystem>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <tpm supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tpm-tis</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tpm-crb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>emulator</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>external</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendVersion'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>2.0</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </tpm>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <redirdev supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='bus'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </redirdev>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <channel supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pty</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>unix</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </channel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <crypto supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>qemu</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>builtin</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </crypto>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <interface supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>default</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>passt</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </interface>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <panic supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>isa</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>hyperv</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </panic>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <console supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>null</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pty</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dev</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>file</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pipe</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>stdio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>udp</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tcp</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>unix</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>qemu-vdagent</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dbus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </console>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </devices>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <gic supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <vmcoreinfo supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <genid supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <backingStoreInput supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <backup supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <async-teardown supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <ps2 supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <sev supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <sgx supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <hyperv supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='features'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>relaxed</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vapic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>spinlocks</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vpindex</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>runtime</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>synic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>stimer</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>reset</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vendor_id</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>frequencies</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>reenlightenment</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tlbflush</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ipi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>avic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>emsr_bitmap</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>xmm_input</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <defaults>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <spinlocks>4095</spinlocks>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <stimer_direct>on</stimer_direct>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </defaults>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </hyperv>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <launchSecurity supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='sectype'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tdx</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </launchSecurity>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: </domainCapabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
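[Annotation: the <domainCapabilities> documents Nova logs above and below can be reproduced directly from libvirt. A minimal sketch with libvirt-python, using the emulator path, arch, and machine type that appear in the i686/pc entry that follows; the qemu:///system connection URI is an assumption, not taken from this log:

    import libvirt

    # Read-only connection to the local libvirt daemon
    # (qemu:///system is an assumed URI for a local KVM host).
    conn = libvirt.openReadOnly("qemu:///system")

    # Same query Nova issues for the arch=i686 / machine_type=pc entry below;
    # any argument may be None to accept libvirt's default for that axis.
    xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator binary (<path> in the XML)
        "i686",                   # guest architecture (<arch>)
        "pc",                     # machine type alias (<machine>)
        "kvm",                    # virtualization type (<domain>)
        0,                        # flags; currently unused
    )
    print(xml)  # the <domainCapabilities> document, as logged here
    conn.close()

The equivalent CLI query is `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc --virttype kvm`; libvirt canonicalizes the "pc" alias to a versioned machine such as the pc-i440fx-rhel7.6.0 seen below.]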
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.111 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  1 14:20:41 np0005541455 nova_compute[189564]: <domainCapabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <domain>kvm</domain>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <arch>i686</arch>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <vcpu max='240'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <iothreads supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <os supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <enum name='firmware'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <loader supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>rom</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pflash</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='readonly'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>yes</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>no</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='secure'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>no</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </loader>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </os>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <cpu>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='host-passthrough' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='hostPassthroughMigratable'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>on</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>off</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='maximum' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='maximumMigratable'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>on</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>off</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='host-model' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <vendor>AMD</vendor>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='x2apic'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='hypervisor'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='stibp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='overflow-recov'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='succor'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='lbrv'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc-scale'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='flushbyasid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='pause-filter'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='pfthreshold'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='disable' name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='custom' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Dhyana-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Genoa'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='auto-ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='auto-ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-128'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-256'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-512'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v6'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v7'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='KnightsMill'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512er'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512pf'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='KnightsMill-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512er'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512pf'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G4-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tbm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G5-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tbm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SierraForest'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cmpccxadd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SierraForest-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cmpccxadd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='athlon'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='athlon-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='core2duo'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='core2duo-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='coreduo'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='coreduo-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='n270'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='n270-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='phenom'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='phenom-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </cpu>
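The <cpu> element that closes here is libvirt's domain-capabilities report as logged by nova-compute: each named CPU model carries usable='yes' or usable='no', and for unusable models a <blockers> element lists the guest-CPU features this host cannot provide (in this dump chiefly AVX-512 variants, TSX features such as hle/rtm, and pcid/invpcid). Below is a minimal sketch, not Nova's actual code, of reading the same data programmatically; it assumes libvirt-python is installed, qemu:///system is reachable, and that the model entries sit under the usual <mode name='custom'> element as in the dump above.

    # Sketch only: list usable CPU models and the blockers for unusable ones.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    # getDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    caps = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    root = ET.fromstring(caps)

    for model in root.findall("./cpu/mode[@name='custom']/model"):
        name = model.text
        if model.get('usable') == 'yes':
            print(f"{name}: usable")
        else:
            # In this dump, blockers are sibling elements keyed by model name.
            xpath = f"./cpu/mode[@name='custom']/blockers[@model='{name}']/feature"
            missing = [f.get('name') for f in root.findall(xpath)]
            print(f"{name}: blocked by {', '.join(missing) or 'unknown'}")
    conn.close()

Run against this host, the sketch would report e.g. Nehalem and Westmere as usable while Skylake-Server is blocked by avx512*, erms, pcid and friends, matching the log lines above.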
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <memoryBacking supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <enum name='sourceType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>file</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>anonymous</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>memfd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </memoryBacking>
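The <memoryBacking> element advertises the guest memory source types the hypervisor supports; file, anonymous and memfd are all available here, which is relevant for features that require shared guest memory such as virtiofs. A one-line continuation of the sketch above (reusing the parsed root; illustrative, not Nova's code):

    # Sketch only: does this host advertise memfd-backed guest memory?
    memfd_ok = any(v.text == 'memfd'
                   for v in root.findall("./memoryBacking/enum[@name='sourceType']/value"))
    print('memfd backing supported:', memfd_ok)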
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <devices>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <disk supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='diskDevice'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>disk</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>cdrom</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>floppy</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>lun</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='bus'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ide</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>fdc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>scsi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>sata</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-non-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </disk>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <graphics supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vnc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>egl-headless</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dbus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </graphics>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <video supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='modelType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vga</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>cirrus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>none</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>bochs</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ramfb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </video>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <hostdev supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='mode'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>subsystem</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='startupPolicy'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>default</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>mandatory</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>requisite</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>optional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='subsysType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pci</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>scsi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='capsType'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='pciBackend'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </hostdev>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <rng supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-non-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>random</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>egd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>builtin</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </rng>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <filesystem supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='driverType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>path</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>handle</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtiofs</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </filesystem>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <tpm supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tpm-tis</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tpm-crb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>emulator</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>external</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendVersion'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>2.0</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </tpm>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <redirdev supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='bus'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </redirdev>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <channel supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pty</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>unix</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </channel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <crypto supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>qemu</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>builtin</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </crypto>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <interface supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>default</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>passt</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </interface>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <panic supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>isa</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>hyperv</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </panic>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <console supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>null</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pty</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dev</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>file</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pipe</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>stdio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>udp</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tcp</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>unix</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>qemu-vdagent</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dbus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </console>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </devices>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <gic supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <vmcoreinfo supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <genid supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <backingStoreInput supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <backup supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <async-teardown supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <ps2 supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <sev supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <sgx supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <hyperv supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='features'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>relaxed</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vapic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>spinlocks</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vpindex</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>runtime</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>synic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>stimer</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>reset</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vendor_id</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>frequencies</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>reenlightenment</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tlbflush</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ipi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>avic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>emsr_bitmap</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>xmm_input</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <defaults>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <spinlocks>4095</spinlocks>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <stimer_direct>on</stimer_direct>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </defaults>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </hyperv>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <launchSecurity supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='sectype'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tdx</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </launchSecurity>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: </domainCapabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.153 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.157 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  1 14:20:41 np0005541455 nova_compute[189564]: <domainCapabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <domain>kvm</domain>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <arch>x86_64</arch>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <vcpu max='4096'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <iothreads supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <os supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <enum name='firmware'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>efi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <loader supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>rom</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pflash</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='readonly'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>yes</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>no</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='secure'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>yes</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>no</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </loader>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </os>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <cpu>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='host-passthrough' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='hostPassthroughMigratable'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>on</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>off</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='maximum' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='maximumMigratable'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>on</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>off</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='host-model' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <vendor>AMD</vendor>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='x2apic'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='hypervisor'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='stibp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='overflow-recov'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='succor'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='lbrv'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc-scale'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='flushbyasid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='pause-filter'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='pfthreshold'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='disable' name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='custom' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Dhyana-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Genoa'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='auto-ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='auto-ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-128'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-256'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-512'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v6'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v7'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='KnightsMill'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512er'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512pf'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='KnightsMill-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512er'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512pf'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G4-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tbm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G5-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tbm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SierraForest'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cmpccxadd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SierraForest-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cmpccxadd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='athlon'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='athlon-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='core2duo'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='core2duo-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='coreduo'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='coreduo-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='n270'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='n270-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='phenom'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='phenom-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </cpu>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <memoryBacking supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <enum name='sourceType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>file</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>anonymous</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>memfd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </memoryBacking>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <devices>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <disk supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='diskDevice'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>disk</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>cdrom</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>floppy</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>lun</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='bus'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>fdc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>scsi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>sata</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-non-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </disk>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <graphics supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vnc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>egl-headless</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dbus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </graphics>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <video supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='modelType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vga</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>cirrus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>none</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>bochs</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ramfb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </video>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <hostdev supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='mode'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>subsystem</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='startupPolicy'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>default</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>mandatory</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>requisite</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>optional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='subsysType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pci</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>scsi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='capsType'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='pciBackend'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </hostdev>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <rng supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-non-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>random</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>egd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>builtin</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </rng>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <filesystem supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='driverType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>path</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>handle</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtiofs</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </filesystem>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <tpm supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tpm-tis</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tpm-crb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>emulator</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>external</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendVersion'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>2.0</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </tpm>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <redirdev supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='bus'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </redirdev>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <channel supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pty</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>unix</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </channel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <crypto supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>qemu</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>builtin</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </crypto>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <interface supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>default</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>passt</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </interface>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <panic supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>isa</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>hyperv</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </panic>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <console supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>null</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pty</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dev</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>file</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pipe</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>stdio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>udp</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tcp</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>unix</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>qemu-vdagent</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dbus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </console>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </devices>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <gic supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <vmcoreinfo supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <genid supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <backingStoreInput supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <backup supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <async-teardown supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <ps2 supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <sev supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <sgx supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <hyperv supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='features'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>relaxed</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vapic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>spinlocks</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vpindex</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>runtime</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>synic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>stimer</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>reset</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vendor_id</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>frequencies</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>reenlightenment</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tlbflush</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ipi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>avic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>emsr_bitmap</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>xmm_input</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <defaults>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <spinlocks>4095</spinlocks>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <stimer_direct>on</stimer_direct>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </defaults>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </hyperv>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <launchSecurity supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='sectype'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tdx</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </launchSecurity>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: </domainCapabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
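The <domainCapabilities> dump ending above is the XML that nova-compute receives from libvirt's getDomainCapabilities() API (invoked by _get_domain_capabilities in host.py). For debugging outside Nova, the same document can be fetched directly; a minimal sketch using the libvirt-python bindings, with the emulator path, arch, machine type, and virt type matching the request logged below (the qemu:///system URI is an assumption for a typical KVM compute node):

    import libvirt

    # Connect to the local libvirt daemon. qemu:///system is the usual URI on
    # a KVM compute node (an assumption here; adjust for your deployment).
    conn = libvirt.open("qemu:///system")

    # getDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    # returns the same <domainCapabilities> XML that nova-compute logs here.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator <path> from the dump below
        "x86_64",                 # <arch>
        "pc",                     # machine_type from the debug line below
        "kvm",                    # <domain> virt type
        0,                        # no flags
    )
    print(caps_xml)
    conn.close()

The equivalent query from a shell is: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc --virttype kvm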
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.218 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  1 14:20:41 np0005541455 nova_compute[189564]: <domainCapabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <path>/usr/libexec/qemu-kvm</path>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <domain>kvm</domain>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <arch>x86_64</arch>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <vcpu max='240'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <iothreads supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <os supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <enum name='firmware'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <loader supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>rom</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pflash</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='readonly'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>yes</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>no</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='secure'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>no</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </loader>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </os>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <cpu>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='host-passthrough' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='hostPassthroughMigratable'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>on</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>off</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='maximum' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='maximumMigratable'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>on</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>off</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='host-model' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <vendor>AMD</vendor>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='x2apic'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc-deadline'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='hypervisor'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc_adjust'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='spec-ctrl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='stibp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='cmp_legacy'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='overflow-recov'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='succor'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='amd-ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='virt-ssbd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='lbrv'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='tsc-scale'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='vmcb-clean'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='flushbyasid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='pause-filter'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='pfthreshold'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='svme-addr-chk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <feature policy='disable' name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <mode name='custom' supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Broadwell-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cascadelake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Cooperlake-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Denverton-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Dhyana-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Genoa'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='auto-ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Genoa-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='auto-ibrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Milan-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amd-psfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='no-nested-data-bp'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='null-sel-clr-base'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='stibp-always-on'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-Rome-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='EPYC-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='GraniteRapids-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-128'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-256'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx10-512'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='prefetchiti'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Haswell-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-noTSX'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v6'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Icelake-Server-v7'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='IvyBridge-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='KnightsMill'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512er'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512pf'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='KnightsMill-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4fmaps'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-4vnniw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512er'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512pf'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G4-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tbm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Opteron_G5-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fma4'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tbm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xop'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SapphireRapids-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='amx-tile'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-bf16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-fp16'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512-vpopcntdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bitalg'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vbmi2'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrc'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fzrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='la57'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='taa-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='tsx-ldtrk'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xfd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SierraForest'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cmpccxadd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='SierraForest-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ifma'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-ne-convert'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx-vnni-int8'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='bus-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cmpccxadd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fbsdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='fsrs'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ibrs-all'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mcdt-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pbrsb-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='psdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='sbdr-ssdp-no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='serialize'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vaes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='vpclmulqdq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Client-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='hle'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='rtm'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Skylake-Server-v5'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512bw'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512cd'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512dq'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512f'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='avx512vl'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='invpcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pcid'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='pku'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='mpx'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v2'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v3'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='core-capability'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='split-lock-detect'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='Snowridge-v4'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='cldemote'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='erms'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='gfni'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdir64b'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='movdiri'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='xsaves'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='athlon'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='athlon-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='core2duo'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='core2duo-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='coreduo'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='coreduo-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='n270'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='n270-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='ss'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='phenom'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <blockers model='phenom-v1'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnow'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <feature name='3dnowext'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </blockers>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </mode>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </cpu>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <memoryBacking supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <enum name='sourceType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>file</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>anonymous</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <value>memfd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </memoryBacking>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <devices>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <disk supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='diskDevice'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>disk</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>cdrom</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>floppy</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>lun</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='bus'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ide</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>fdc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>scsi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>sata</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-non-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </disk>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <graphics supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vnc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>egl-headless</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dbus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </graphics>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <video supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='modelType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vga</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>cirrus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>none</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>bochs</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ramfb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </video>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <hostdev supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='mode'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>subsystem</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='startupPolicy'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>default</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>mandatory</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>requisite</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>optional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='subsysType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pci</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>scsi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='capsType'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='pciBackend'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </hostdev>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <rng supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtio-non-transitional</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>random</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>egd</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>builtin</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </rng>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <filesystem supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='driverType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>path</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>handle</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>virtiofs</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </filesystem>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <tpm supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tpm-tis</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tpm-crb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>emulator</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>external</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendVersion'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>2.0</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </tpm>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <redirdev supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='bus'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>usb</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </redirdev>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <channel supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pty</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>unix</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </channel>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <crypto supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>qemu</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendModel'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>builtin</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </crypto>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <interface supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='backendType'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>default</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>passt</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </interface>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <panic supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='model'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>isa</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>hyperv</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </panic>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <console supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='type'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>null</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vc</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pty</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dev</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>file</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>pipe</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>stdio</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>udp</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tcp</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>unix</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>qemu-vdagent</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>dbus</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </console>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </devices>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  <features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <gic supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <vmcoreinfo supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <genid supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <backingStoreInput supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <backup supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <async-teardown supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <ps2 supported='yes'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <sev supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <sgx supported='no'/>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <hyperv supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='features'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>relaxed</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vapic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>spinlocks</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vpindex</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>runtime</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>synic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>stimer</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>reset</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>vendor_id</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>frequencies</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>reenlightenment</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tlbflush</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>ipi</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>avic</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>emsr_bitmap</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>xmm_input</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <defaults>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <spinlocks>4095</spinlocks>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <stimer_direct>on</stimer_direct>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <tlbflush_direct>on</tlbflush_direct>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <tlbflush_extended>on</tlbflush_extended>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </defaults>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </hyperv>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    <launchSecurity supported='yes'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      <enum name='sectype'>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:        <value>tdx</value>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:      </enum>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:    </launchSecurity>
Dec  1 14:20:41 np0005541455 nova_compute[189564]:  </features>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: </domainCapabilities>
Dec  1 14:20:41 np0005541455 nova_compute[189564]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
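[editor's note] The XML dump above is the <domainCapabilities> document nova fetched from libvirt in _get_domain_capabilities. A minimal sketch of the same query with libvirt-python, assuming the usual qemu:///system URI and the RHEL emulator path (nova passes the emulator, arch, machine type and virt type it is probing):

    import libvirt  # libvirt-python

    conn = libvirt.open('qemu:///system')  # assumed URI
    # getDomainCapabilities(emulatorbin, arch, machine, virttype, flags);
    # None for any argument lets libvirt pick the host default.
    xml = conn.getDomainCapabilities('/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)
    print(xml)  # the same <domainCapabilities> document logged above
    conn.close()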
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.274 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.275 189568 INFO nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Secure Boot support detected#033[00m
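[editor's note] supports_secure_boot decides from that same document. A rough ElementTree illustration of the check, not nova's exact code; the XPath follows the standard domainCapabilities layout, where the <os> loader advertises an enum named 'secure':

    import xml.etree.ElementTree as ET

    def secure_boot_supported(domcaps_xml: str) -> bool:
        # True when the firmware loader can enforce Secure Boot.
        root = ET.fromstring(domcaps_xml)
        values = root.findall("./os/loader/enum[@name='secure']/value")
        return any(v.text == 'yes' for v in values)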
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.277 189568 INFO nova.virt.libvirt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.286 189568 DEBUG nova.virt.libvirt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.309 189568 INFO nova.virt.node [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Determined node identity 0211b5d4-bab8-409f-8f53-df766ffbcb27 from /var/lib/nova/compute_id#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.333 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Verified node 0211b5d4-bab8-409f-8f53-df766ffbcb27 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
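[editor's note] The node identity used above is a UUID persisted on local disk, which is what lets nova detect a host rename. A minimal sketch of that read, assuming the file holds the bare UUID text (path taken from the log line; error handling elided):

    import uuid

    with open('/var/lib/nova/compute_id') as f:
        node_uuid = uuid.UUID(f.read().strip())
    # e.g. UUID('0211b5d4-bab8-409f-8f53-df766ffbcb27')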
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.391 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.845 189568 ERROR nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Could not retrieve compute node resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '0211b5d4-bab8-409f-8f53-df766ffbcb27' not found: No resource provider with uuid 0211b5d4-bab8-409f-8f53-df766ffbcb27 found  ", "request_id": "req-02830547-327f-41c9-8b68-99c77a9a5a37"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '0211b5d4-bab8-409f-8f53-df766ffbcb27' not found: No resource provider with uuid 0211b5d4-bab8-409f-8f53-df766ffbcb27 found  ", "request_id": "req-02830547-327f-41c9-8b68-99c77a9a5a37"}]}#033[00m
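[editor's note] The ERROR is placement answering HTTP 404 for the provider's allocations: on first start-up the resource provider record does not exist yet, and nova creates it about a second later (the "Created resource provider record" INFO below). A hedged sketch of that REST exchange with plain requests; the endpoint and token are placeholders, since nova really goes through a keystoneauth session:

    import requests

    PLACEMENT = 'http://placement.example.com'   # placeholder endpoint
    HEADERS = {'X-Auth-Token': '...',            # placeholder token
               'OpenStack-API-Version': 'placement 1.36'}
    RP = '0211b5d4-bab8-409f-8f53-df766ffbcb27'

    r = requests.get(f'{PLACEMENT}/resource_providers/{RP}/allocations',
                     headers=HEADERS)
    if r.status_code == 404:
        # Provider record missing: create it, as the resource tracker does.
        requests.post(f'{PLACEMENT}/resource_providers', headers=HEADERS,
                      json={'uuid': RP,
                            'name': 'compute-0.ctlplane.example.com'})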
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.863 189568 DEBUG oslo_concurrency.lockutils [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.863 189568 DEBUG oslo_concurrency.lockutils [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.863 189568 DEBUG oslo_concurrency.lockutils [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
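[editor's note] The Acquiring/acquired/released trio is oslo.concurrency's standard lock logging; in source it is typically nothing more than a decorated method, roughly like this (the names mirror the log, the body is illustrative):

    from oslo_concurrency import lockutils

    class ResourceTracker:
        @lockutils.synchronized('compute_resources')
        def clean_compute_node_cache(self, compute_nodes_in_db):
            # Runs with the "compute_resources" lock held; the inner()
            # wrapper is what emits the three DEBUG lines above.
            ...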
Dec  1 14:20:41 np0005541455 nova_compute[189564]: 2025-12-01 19:20:41.864 189568 DEBUG nova.compute.resource_tracker [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.093 189568 WARNING nova.virt.libvirt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.094 189568 DEBUG nova.compute.resource_tracker [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6034MB free_disk=72.60871124267578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.094 189568 DEBUG oslo_concurrency.lockutils [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.095 189568 DEBUG oslo_concurrency.lockutils [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.285 189568 ERROR nova.compute.resource_tracker [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '0211b5d4-bab8-409f-8f53-df766ffbcb27' not found: No resource provider with uuid 0211b5d4-bab8-409f-8f53-df766ffbcb27 found  ", "request_id": "req-9b977e8e-5fa7-4fc4-b1bc-98e735b9b3d5"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '0211b5d4-bab8-409f-8f53-df766ffbcb27' not found: No resource provider with uuid 0211b5d4-bab8-409f-8f53-df766ffbcb27 found  ", "request_id": "req-9b977e8e-5fa7-4fc4-b1bc-98e735b9b3d5"}]}#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.286 189568 DEBUG nova.compute.resource_tracker [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.286 189568 DEBUG nova.compute.resource_tracker [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.753 189568 INFO nova.scheduler.client.report [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [req-6e2800d7-82c5-4e8a-9dc1-7fc168629133] Created resource provider record via placement API for resource provider with UUID 0211b5d4-bab8-409f-8f53-df766ffbcb27 and name compute-0.ctlplane.example.com.#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.780 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec  1 14:20:42 np0005541455 nova_compute[189564]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.780 189568 INFO nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] kernel doesn't support AMD SEV#033[00m
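[editor's note] That probe is a plain sysfs read; the bracketed [N] two lines up is the file's exact contents, trailing newline included. A minimal equivalent (the accepted "enabled" spellings are an assumption; kernels expose this bool module parameter as Y/N or 1/0):

    from pathlib import Path

    sev = Path('/sys/module/kvm_amd/parameters/sev')
    supported = sev.exists() and sev.read_text().strip() in ('Y', 'y', '1')
    # 'N' here, hence "kernel doesn't support AMD SEV" above.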
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.781 189568 DEBUG nova.compute.provider_tree [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.781 189568 DEBUG nova.virt.libvirt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.862 189568 DEBUG nova.scheduler.client.report [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Updated inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.862 189568 DEBUG nova.compute.provider_tree [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Updating resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  1 14:20:42 np0005541455 nova_compute[189564]: 2025-12-01 19:20:42.863 189568 DEBUG nova.compute.provider_tree [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
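[editor's note] The inventory dict repeats through the create/update handshake, but what placement will actually schedule against is (total - reserved) * allocation_ratio per resource class. With the numbers reported above:

    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7168.0 / VCPU 32.0 / DISK_GB 71.1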
Dec  1 14:20:43 np0005541455 nova_compute[189564]: 2025-12-01 19:20:43.048 189568 DEBUG nova.compute.provider_tree [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Updating resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  1 14:20:43 np0005541455 nova_compute[189564]: 2025-12-01 19:20:43.083 189568 DEBUG nova.compute.resource_tracker [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 14:20:43 np0005541455 nova_compute[189564]: 2025-12-01 19:20:43.083 189568 DEBUG oslo_concurrency.lockutils [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 14:20:43 np0005541455 nova_compute[189564]: 2025-12-01 19:20:43.084 189568 DEBUG nova.service [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec  1 14:20:43 np0005541455 nova_compute[189564]: 2025-12-01 19:20:43.170 189568 DEBUG nova.service [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec  1 14:20:43 np0005541455 nova_compute[189564]: 2025-12-01 19:20:43.170 189568 DEBUG nova.servicegroup.drivers.db [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec  1 14:20:45 np0005541455 podman[189868]: 2025-12-01 19:20:45.355531947 +0000 UTC m=+0.131481454 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
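[editor's note] These container health_status=healthy events come from podman running the configured test ('/openstack/healthcheck') inside the container on its timer. The same check can be fired by hand; a small sketch via subprocess (podman healthcheck run exits 0 for healthy, 1 for unhealthy):

    import subprocess

    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'],
                       capture_output=True, text=True)
    healthy = (r.returncode == 0)  # matches health_status=healthy above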
Dec  1 14:20:46 np0005541455 systemd-logind[797]: New session 26 of user zuul.
Dec  1 14:20:46 np0005541455 systemd[1]: Started Session 26 of User zuul.
Dec  1 14:20:47 np0005541455 python3.9[190048]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 14:20:47 np0005541455 podman[190053]: 2025-12-01 19:20:47.324745719 +0000 UTC m=+0.095592982 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec  1 14:20:48 np0005541455 python3.9[190221]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:20:48 np0005541455 systemd[1]: Reloading.
Dec  1 14:20:48 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:20:48 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:20:49 np0005541455 python3.9[190408]: ansible-ansible.builtin.service_facts Invoked
Dec  1 14:20:50 np0005541455 network[190425]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 14:20:50 np0005541455 network[190426]: 'network-scripts' will be removed from distribution in near future.
Dec  1 14:20:50 np0005541455 network[190427]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 14:20:56 np0005541455 python3.9[190701]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:20:57 np0005541455 python3.9[190854]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:20:57 np0005541455 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 14:20:57 np0005541455 python3.9[191007]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:20:58 np0005541455 python3.9[191159]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
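[editor's note] #012 is journald's octal escape for a newline, so the shell handed to ansible.legacy.command above decodes to:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi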
Dec  1 14:20:59 np0005541455 python3.9[191311]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 14:21:00 np0005541455 python3.9[191463]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:21:00 np0005541455 systemd[1]: Reloading.
Dec  1 14:21:00 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:21:00 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:21:01 np0005541455 python3.9[191649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:21:02 np0005541455 python3.9[191802]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:21:03 np0005541455 python3.9[191952]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:21:04 np0005541455 python3.9[192104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:04 np0005541455 podman[192106]: 2025-12-01 19:21:04.319764034 +0000 UTC m=+0.087410055 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd)
Dec  1 14:21:04 np0005541455 python3.9[192245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616863.4718232-133-205013353941375/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:21:05 np0005541455 python3.9[192397]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  1 14:21:06 np0005541455 python3.9[192549]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  1 14:21:07 np0005541455 python3.9[192702]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  1 14:21:10 np0005541455 python3.9[192862]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  1 14:21:11 np0005541455 python3.9[193020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:11 np0005541455 python3.9[193141]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764616870.9716523-201-111968073399905/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:21:12.161 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 14:21:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:21:12.162 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 14:21:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:21:12.162 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 14:21:12 np0005541455 python3.9[193291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:13 np0005541455 python3.9[193412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764616872.1805713-201-176453870905356/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:13 np0005541455 python3.9[193562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:14 np0005541455 python3.9[193683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764616873.2905216-201-10838272841058/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:14 np0005541455 python3.9[193833]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:21:15 np0005541455 python3.9[193985]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:21:16 np0005541455 podman[194111]: 2025-12-01 19:21:16.100482073 +0000 UTC m=+0.088582492 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 14:21:16 np0005541455 python3.9[194154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:17 np0005541455 python3.9[194285]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616875.7492228-260-174084770210468/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
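[editor's note] The odd-looking mode=420 in this and the following copy/file tasks is octal 0644 rendered as decimal, the classic ansible quirk when a mode is written unquoted in the playbook:

    print(0o644)  # 420 -- rw-r--r--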
Dec  1 14:21:17 np0005541455 podman[194409]: 2025-12-01 19:21:17.704486584 +0000 UTC m=+0.068136207 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 14:21:17 np0005541455 python3.9[194446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:18 np0005541455 python3.9[194531]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:19 np0005541455 python3.9[194681]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:19 np0005541455 python3.9[194802]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616878.5433462-260-121202881010667/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:20 np0005541455 python3.9[194952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:20 np0005541455 python3.9[195073]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616879.675216-260-248379694122328/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:21 np0005541455 python3.9[195223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:21 np0005541455 python3.9[195344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616880.8718326-260-203672165528177/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:22 np0005541455 python3.9[195494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:22 np0005541455 python3.9[195615]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616881.9384916-260-7274571390357/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:23 np0005541455 python3.9[195765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:24 np0005541455 python3.9[195886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616883.160376-260-245977655918667/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:25 np0005541455 python3.9[196036]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:25 np0005541455 python3.9[196157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616884.3494053-260-235750694052463/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:26 np0005541455 nova_compute[189564]: 2025-12-01 19:21:26.173 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 14:21:26 np0005541455 python3.9[196307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:26 np0005541455 nova_compute[189564]: 2025-12-01 19:21:26.312 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 14:21:26 np0005541455 python3.9[196428]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616885.733125-260-218365159057390/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:27 np0005541455 python3.9[196578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:27 np0005541455 python3.9[196699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616886.9545372-260-78352842912627/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:28 np0005541455 python3.9[196849]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:29 np0005541455 python3.9[196970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616888.0581057-260-222270675372403/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:29 np0005541455 python3.9[197120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:30 np0005541455 python3.9[197196]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:30 np0005541455 python3.9[197346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:31 np0005541455 python3.9[197422]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:32 np0005541455 python3.9[197572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:32 np0005541455 python3.9[197648]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:33 np0005541455 python3.9[197800]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:33 np0005541455 python3.9[197952]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:34 np0005541455 python3.9[198104]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:21:34 np0005541455 podman[198105]: 2025-12-01 19:21:34.551540328 +0000 UTC m=+0.055909373 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 14:21:35 np0005541455 python3.9[198277]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:21:35 np0005541455 systemd[1]: Reloading.
Dec  1 14:21:35 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:21:35 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:21:35 np0005541455 systemd[1]: Listening on Podman API Socket.
Dec  1 14:21:36 np0005541455 python3.9[198468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:37 np0005541455 python3.9[198591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616896.0528805-482-174776209199315/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:21:37 np0005541455 python3.9[198667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:38 np0005541455 python3.9[198790]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616896.0528805-482-174776209199315/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
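The two ansible.legacy.copy invocations above each log a sha1 checksum for the deployed healthcheck script. A minimal stdlib sketch for verifying the on-disk file against that logged digest; the path and expected value are copied from the log entries above:

    import hashlib

    def sha1_of(path):
        """Stream the file in chunks so large artifacts are not read at once."""
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Values copied from the ansible-ansible.legacy.copy entries above.
    expected = "ebb343c21fce35a02591a9351660cb7035a47d42"
    path = "/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck"
    assert sha1_of(path) == expected, "deployed healthcheck differs from what ansible copied"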
Dec  1 14:21:39 np0005541455 python3.9[198942]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  1 14:21:40 np0005541455 python3.9[199094]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.252 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.252 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.252 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.268 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.269 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.269 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.269 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.269 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.269 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.270 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.270 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.270 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.302 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.302 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.302 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.302 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.468 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.470 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6066MB free_disk=72.60656356811523GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.470 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.471 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.538 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.539 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.570 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.592 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.594 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 14:21:40 np0005541455 nova_compute[189564]: 2025-12-01 19:21:40.594 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
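For reference, the inventory reported above (set_inventory_for_provider, report.py:940) is what determines schedulable capacity. A short worked sketch, assuming Placement's documented formula capacity = (total - reserved) * allocation_ratio, with the values taken from that log line:

    # Inventory values copied from the set_inventory_for_provider line above.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for resource_class, inv in inventory.items():
        # Placement's documented capacity formula (assumption, see lead-in).
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{resource_class}: {capacity:g} schedulable")
    # -> MEMORY_MB: 7168, VCPU: 32, DISK_GB: 71.1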
Dec  1 14:21:40 np0005541455 auditd[701]: Audit daemon rotating log files
Dec  1 14:21:41 np0005541455 python3[199246]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:21:41 np0005541455 podman[199283]: 2025-12-01 19:21:41.535506959 +0000 UTC m=+0.061409636 container create 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:21:41 np0005541455 podman[199283]: 2025-12-01 19:21:41.503126299 +0000 UTC m=+0.029029076 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  1 14:21:41 np0005541455 python3[199246]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
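The PODMAN-CONTAINER-DEBUG entry above shows edpm_container_manage expanding the config_data dict into a podman create command line. The following is a hypothetical reconstruction of that mapping; it mirrors the flags visible in the debug line and is not the module's actual implementation:

    def podman_create_args(name, cfg):
        """Hypothetical sketch: map a config_data dict onto podman-create
        flags, following the ordering visible in the debug line above.
        (--label flags omitted for brevity.)"""
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        args += ["--log-driver", "journald", "--log-level", "info"]
        if cfg.get("net") == "host":
            args += ["--network", "host"]
        if "security_opt" in cfg:
            args += ["--security-opt", cfg["security_opt"]]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        return args + [cfg["image"], cfg.get("command", "")]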
Dec  1 14:21:42 np0005541455 python3.9[199473]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:21:43 np0005541455 python3.9[199628]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:43 np0005541455 python3.9[199779]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764616903.085225-546-35273033534358/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:44 np0005541455 python3.9[199855]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:21:44 np0005541455 systemd[1]: Reloading.
Dec  1 14:21:44 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:21:44 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:21:45 np0005541455 python3.9[199966]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:21:45 np0005541455 systemd[1]: Reloading.
Dec  1 14:21:45 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:21:45 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:21:45 np0005541455 systemd[1]: Starting ceilometer_agent_compute container...
Dec  1 14:21:45 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:21:45 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9164896699846de4239b19faa50d6a73ade2163e052ea0be24031ec5788b8e/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:45 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9164896699846de4239b19faa50d6a73ade2163e052ea0be24031ec5788b8e/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:45 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9164896699846de4239b19faa50d6a73ade2163e052ea0be24031ec5788b8e/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:45 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9164896699846de4239b19faa50d6a73ade2163e052ea0be24031ec5788b8e/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:45 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de.
Dec  1 14:21:45 np0005541455 podman[200007]: 2025-12-01 19:21:45.877754361 +0000 UTC m=+0.111103582 container init 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: + sudo -E kolla_set_configs
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: sudo: unable to send audit message: Operation not permitted
Dec  1 14:21:45 np0005541455 podman[200007]: 2025-12-01 19:21:45.915440678 +0000 UTC m=+0.148789849 container start 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2)
Dec  1 14:21:45 np0005541455 podman[200007]: ceilometer_agent_compute
Dec  1 14:21:45 np0005541455 systemd[1]: Started ceilometer_agent_compute container.
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Validating config file
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Copying service configuration files
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: INFO:__main__:Writing out command to execute
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: ++ cat /run_command
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: + ARGS=
Dec  1 14:21:45 np0005541455 ceilometer_agent_compute[200022]: + sudo kolla_copy_cacerts
Dec  1 14:21:45 np0005541455 podman[200029]: 2025-12-01 19:21:45.999463715 +0000 UTC m=+0.074289551 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: sudo: unable to send audit message: Operation not permitted
Dec  1 14:21:46 np0005541455 systemd[1]: 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de-141cc4d8b9d8f7db.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:21:46 np0005541455 systemd[1]: 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de-141cc4d8b9d8f7db.service: Failed with result 'exit-code'.
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: + [[ ! -n '' ]]
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: + . kolla_extend_start
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: + umask 0022
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
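At this point kolla_set_configs has laid out a base /etc/ceilometer/ceilometer.conf plus numbered drop-ins under /etc/ceilometer/ceilometer.conf.d/ (01-ceilometer-custom.conf, 02-ceilometer-host-specific.conf), and the config dump that follows confirms the service reads both (the "config files" and "config_dir" lines below). A minimal stdlib sketch of that layering, mimicking the later-file-wins precedence; the service itself uses oslo.config, not configparser:

    import configparser
    import glob

    conf = configparser.ConfigParser()
    conf.read("/etc/ceilometer/ceilometer.conf")  # base file copied above

    # Drop-ins apply in sorted order, so 02-ceilometer-host-specific.conf
    # can override anything set in 01-ceilometer-custom.conf.
    for drop_in in sorted(glob.glob("/etc/ceilometer/ceilometer.conf.d/*.conf")):
        conf.read(drop_in)  # later reads win on conflicting options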
Dec  1 14:21:46 np0005541455 podman[200135]: 2025-12-01 19:21:46.307073809 +0000 UTC m=+0.083095580 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 14:21:46 np0005541455 python3.9[200233]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:21:46 np0005541455 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.797 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.797 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.798 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.798 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.798 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.798 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.798 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.798 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.798 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.798 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.798 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.799 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.800 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.800 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.800 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.800 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.800 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.800 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.800 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.800 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.800 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.801 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.802 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.803 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.803 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.803 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.803 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.803 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.803 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.803 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.803 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.803 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.804 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.805 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.806 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.807 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.808 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.809 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.810 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.811 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.811 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.811 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.811 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.811 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
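The banner above closes the master process's (pid 2) full configuration dump. Note that the unqualified prometheus_* and heartbeat options earlier in the dump are the [DEFAULT]-section copies still at their packaged defaults; the deployed overrides live in the [polling] group (exporter enabled, TLS on, listening on [::]:9101). A sketch of the ceilometer.conf stanza that would produce those [polling] values follows; it is reconstructed from the logged output, not copied from the node, and anything omitted stays at its default:

    [polling]
    # Reconstructed from the polling.* lines in the dump above (illustrative).
    batch_size = 50
    cfg_file = polling.yaml
    heartbeat_socket_dir = /var/lib/ceilometer
    threads_to_process_pollsters = 1
    enable_prometheus_exporter = true
    prometheus_listen_addresses = [::]:9101
    prometheus_tls_enable = true
    prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt
    prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key

With this in place the exporter answers on https://<host>:9101/metrics using the certificate above. The near-identical dump repeated below under pid 12 is expected: cotyledon re-logs the full configuration for every service process it spawns.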
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.812 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.831 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.832 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.832 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.832 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.832 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.832 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.832 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.833 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.834 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.835 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.836 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.837 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.838 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.839 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.839 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.839 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.839 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.839 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.839 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.839 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.839 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.839 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.840 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.841 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.842 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.843 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.844 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
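Values printed as **** in both dumps (coordination.backend_url, notification.messaging_urls, publisher.telemetry_secret, the rgw_admin_credentials keys) are set but masked: oslo.config hides any option registered with secret=True when log_opt_values() runs. A minimal sketch of the mechanism, using a hypothetical option name rather than one of ceilometer's:

    # Sketch: how oslo.config produces the '****' lines above.
    # 'my_secret' is a hypothetical option, not part of ceilometer.
    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('my_secret', secret=True, default='s3cr3t'),
        cfg.StrOpt('plain', default='visible'),
    ])
    CONF([])  # parse an empty command line

    logging.basicConfig(level=logging.DEBUG)
    # 'my_secret' is logged as ****; 'plain' is logged verbatim.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)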
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.845 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.847 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.848 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.849 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
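The heartbeat child (service pid 12) is now serving /var/lib/ceilometer/ceilometer-compute.socket, per the "Starting heartbeat child service" line above. The log does not reveal the wire protocol spoken on that socket, so the safest external liveness check is simply that the path exists and is a UNIX socket:

    # Check the heartbeat socket announced in the log above. Only the
    # path is taken from the log; the socket's protocol is not shown,
    # so this verifies presence, not responsiveness.
    import os
    import stat

    path = '/var/lib/ceilometer/ceilometer-compute.socket'
    st = os.stat(path)  # FileNotFoundError here means the agent is down
    assert stat.S_ISSOCK(st.st_mode), f'{path} is not a socket'
    print(f'{path} present and is a UNIX socket')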
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.943 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.944 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  1 14:21:46 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:46.944 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.075 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.085 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.085 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.085 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.198 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.198 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.198 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.198 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.198 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.199 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.200 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.201 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.202 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.203 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.204 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.205 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.206 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.207 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.208 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.209 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.210 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.211 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.211 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.211 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.211 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.211 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.211 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.211 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.212 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200022]: 2025-12-01 19:21:47.224 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec  1 14:21:47 np0005541455 virtqemud[189187]: End of file while reading data: Input/output error
Dec  1 14:21:47 np0005541455 systemd[1]: libpod-3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de.scope: Deactivated successfully.
Dec  1 14:21:47 np0005541455 systemd[1]: libpod-3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de.scope: Consumed 1.533s CPU time.
Dec  1 14:21:47 np0005541455 podman[200237]: 2025-12-01 19:21:47.410077454 +0000 UTC m=+0.651214151 container died 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 14:21:47 np0005541455 systemd[1]: 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de-141cc4d8b9d8f7db.timer: Deactivated successfully.
Dec  1 14:21:47 np0005541455 systemd[1]: Stopped /usr/bin/podman healthcheck run 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de.
Dec  1 14:21:47 np0005541455 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de-userdata-shm.mount: Deactivated successfully.
Dec  1 14:21:47 np0005541455 systemd[1]: var-lib-containers-storage-overlay-5a9164896699846de4239b19faa50d6a73ade2163e052ea0be24031ec5788b8e-merged.mount: Deactivated successfully.
Dec  1 14:21:47 np0005541455 podman[200237]: 2025-12-01 19:21:47.455948398 +0000 UTC m=+0.697085075 container cleanup 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 14:21:47 np0005541455 podman[200237]: ceilometer_agent_compute
Dec  1 14:21:47 np0005541455 podman[200279]: ceilometer_agent_compute
Dec  1 14:21:47 np0005541455 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  1 14:21:47 np0005541455 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  1 14:21:47 np0005541455 systemd[1]: Starting ceilometer_agent_compute container...
Dec  1 14:21:47 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:21:47 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9164896699846de4239b19faa50d6a73ade2163e052ea0be24031ec5788b8e/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:47 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9164896699846de4239b19faa50d6a73ade2163e052ea0be24031ec5788b8e/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:47 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9164896699846de4239b19faa50d6a73ade2163e052ea0be24031ec5788b8e/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:47 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a9164896699846de4239b19faa50d6a73ade2163e052ea0be24031ec5788b8e/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:47 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de.
Dec  1 14:21:47 np0005541455 podman[200292]: 2025-12-01 19:21:47.667488935 +0000 UTC m=+0.124303519 container init 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: + sudo -E kolla_set_configs
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: sudo: unable to send audit message: Operation not permitted
Dec  1 14:21:47 np0005541455 podman[200292]: 2025-12-01 19:21:47.713638178 +0000 UTC m=+0.170452762 container start 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 14:21:47 np0005541455 podman[200292]: ceilometer_agent_compute
Dec  1 14:21:47 np0005541455 systemd[1]: Started ceilometer_agent_compute container.
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Validating config file
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Copying service configuration files
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: INFO:__main__:Writing out command to execute
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: ++ cat /run_command
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: + ARGS=
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: + sudo kolla_copy_cacerts
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: sudo: unable to send audit message: Operation not permitted
Dec  1 14:21:47 np0005541455 podman[200315]: 2025-12-01 19:21:47.794108654 +0000 UTC m=+0.065156214 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 14:21:47 np0005541455 systemd[1]: 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de-2941d675b2b12b50.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:21:47 np0005541455 systemd[1]: 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de-2941d675b2b12b50.service: Failed with result 'exit-code'.
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: + [[ ! -n '' ]]
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: + . kolla_extend_start
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: + umask 0022
Dec  1 14:21:47 np0005541455 ceilometer_agent_compute[200308]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  1 14:21:47 np0005541455 podman[200318]: 2025-12-01 19:21:47.825744701 +0000 UTC m=+0.086697713 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 14:21:48 np0005541455 python3.9[200506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.550 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.550 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.550 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.550 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.550 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.551 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.552 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.553 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.554 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.555 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.556 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.557 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.559 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.562 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.563 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.563 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.563 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.563 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.581 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.581 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.581 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.582 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.583 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.584 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.585 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.586 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.587 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.588 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.589 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.590 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.591 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.592 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.593 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.594 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.595 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.595 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.595 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.595 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.595 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.595 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.597 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.600 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.602 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.622 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.632 15 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.632 15 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.633 15 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.776 15 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.776 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.776 15 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.776 15 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.776 15 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.776 15 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.776 15 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.777 15 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.778 15 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.779 15 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.780 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.781 15 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.782 15 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.783 15 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.783 15 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.783 15 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.783 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.783 15 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.783 15 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.783 15 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.783 15 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.783 15 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.784 15 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.785 15 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.786 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
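The [service_credentials] block above maps one-to-one onto a keystoneauth password plugin: auth_type = password against the internal Keystone endpoint, user ceilometer in project service, both domains Default. A sketch of the equivalent session, with SECRET standing in for the masked password:

    # keystoneauth session built from the [service_credentials] values above.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='ceilometer',
        password='SECRET',                 # logged as ****
        project_name='service',
        user_domain_name='Default',
        project_domain_name='Default',
    )
    sess = session.Session(auth=auth)      # per-request endpoint_filter then
                                           # selects the internal interface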
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.787 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.788 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.789 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.789 15 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.789 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.789 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.789 15 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.789 15 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
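The block ending above is oslo.config's standard startup dump: cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values() in each worker, emitting one DEBUG line per registered option and masking anything registered with secret=True (telemetry_secret, service passwords, RGW keys) as ****. A minimal sketch of the same mechanism, assuming only the stock oslo.config API:

    # Minimal sketch of the option dump above, using stock oslo.config.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('username', default='ceilometer'),
        cfg.StrOpt('password', secret=True),  # rendered as **** in the dump
    ], group='service_credentials')

    CONF([], project='demo')
    CONF.log_opt_values(LOG, logging.DEBUG)   # one "option = value" line each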
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.789 15 DEBUG cotyledon._service [-] Run service AgentManager(0) [15] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
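cotyledon is the process supervisor here: ceilometer's AgentManager is registered as a cotyledon service, and wait_forever keeps worker 15 running until signalled. A minimal sketch of that pattern, with DemoService standing in for AgentManager:

    # Minimal cotyledon service; DemoService is a stand-in for AgentManager.
    import time
    import cotyledon

    class DemoService(cotyledon.Service):
        def run(self):
            while True:          # real agents set up polling loops here
                time.sleep(60)

    manager = cotyledon.ServiceManager()
    manager.add(DemoService, workers=1)
    manager.run()                # blocks, supervising the worker process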
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.792 15 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
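The dict logged above is the parsed polling definition: a single source named pollsters, polled every 120 seconds, covering the listed meter patterns (disk.* and network.* are wildcards). On disk this is a YAML file; the sketch below loads an equivalent structure with PyYAML (the inline YAML mirrors the logged dict, nothing more):

    # Load a polling definition equivalent to the dict in the log line above.
    import yaml

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """

    config = yaml.safe_load(POLLING_YAML)
    for source in config['sources']:
        print(source['name'], source['interval'], source['meters'])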
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.807 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.807 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
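threads_to_process_pollsters = 1 (see the config dump above) gives this source a single worker thread for the couple dozen pollsters matched here, hence the preceding warning: submissions queue behind one another and the cycle runs serially. A small sketch of the underlying concurrent.futures behaviour:

    # With max_workers=1, submitted tasks run strictly one after another,
    # so a cycle takes roughly the sum of the individual task durations.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        time.sleep(0.1)          # stand-in for one pollster's work
        return meter

    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, m)
                   for m in ('cpu', 'memory.usage', 'disk.device.usage')]
        for f in futures:
            print(f.result())    # completes in submission order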
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.808 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.808 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.808 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.808 15 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
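The compute pollsters take their measurements from the local hypervisor, and the line above shows the agent opening the system libvirt URI. A minimal read-only sketch using the libvirt-python binding (assumes the binding and a local libvirtd are available):

    # Read-only libvirt connection, as used for compute instance inspection.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # An empty domain list here matches the "no resources found"
        # skips later in this polling cycle.
        for dom in conn.listAllDomains(0):
            print(dom.UUIDString(), dom.name(), dom.state())
    finally:
        conn.close()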
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.813 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
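Every pollster in this cycle follows the same two-step shape visible above: run the local_instances discovery (cached per cycle), then skip the pollster when discovery returns nothing, which is the case here since no instances run on this host yet. A condensed sketch of that control flow (names are illustrative, not ceilometer's actual classes):

    # Condensed discover-then-skip loop, as in the log lines above.
    discovery_cache = {}

    def discover(method):
        # One discovery run per method per cycle; the result is cached.
        if method not in discovery_cache:
            discovery_cache[method] = []   # no local instances this cycle
        return discovery_cache[method]

    for meter in ('power.state', 'cpu', 'memory.usage'):
        resources = discover('local_instances')
        if not resources:
            print(f'Skip pollster {meter}, no resources found this cycle')
            continue
        # ...otherwise poll each resource and publish samples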
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.820 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.820 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:21:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:21:48.820 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
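The cycle above also shows where the pollsters come from: each is a stevedore Extension, i.e. a Python entry point resolved by name. A minimal enumeration sketch; the namespace ceilometer.poll.compute is assumed to be ceilometer's entry-point group for compute pollsters, so treat it as an assumption about this deployment:

    # Enumerate pollster plugins via stevedore, without instantiating them.
    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',  # assumed entry-point group
        invoke_on_load=False,
    )
    for ext in mgr:
        # ext.name is the meter name (e.g. 'cpu'); ext.plugin the class.
        print(ext.name, ext.plugin)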
Dec  1 14:21:49 np0005541455 python3.9[200642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616907.9796777-578-23907129310534/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:21:49 np0005541455 python3.9[200794]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  1 14:21:51 np0005541455 python3.9[200946]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:21:52 np0005541455 python3[201098]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:21:52 np0005541455 podman[201133]: 2025-12-01 19:21:52.886586525 +0000 UTC m=+0.046940120 container create 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 14:21:52 np0005541455 podman[201133]: 2025-12-01 19:21:52.859310136 +0000 UTC m=+0.019663761 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  1 14:21:52 np0005541455 python3[201098]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec  1 14:21:53 np0005541455 python3.9[201323]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:21:54 np0005541455 python3.9[201477]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:54 np0005541455 python3.9[201628]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764616914.375564-631-238452377280874/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:21:55 np0005541455 python3.9[201704]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:21:55 np0005541455 systemd[1]: Reloading.
Dec  1 14:21:55 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:21:55 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:21:56 np0005541455 python3.9[201815]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:21:56 np0005541455 systemd[1]: Reloading.
Dec  1 14:21:56 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:21:56 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:21:56 np0005541455 systemd[1]: Starting node_exporter container...
Dec  1 14:21:56 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:21:56 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3e1ab850098d4002a5852171558d4a3e5ff19417f8e1f29bddf28555050f02/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:56 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3e1ab850098d4002a5852171558d4a3e5ff19417f8e1f29bddf28555050f02/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:57 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2.
Dec  1 14:21:57 np0005541455 podman[201854]: 2025-12-01 19:21:57.023089345 +0000 UTC m=+0.185912460 container init 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.036Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.036Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.036Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.036Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.036Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.037Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.037Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.037Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.037Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=arp
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=bcache
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=bonding
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=cpu
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=edac
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=filefd
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=netclass
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=netdev
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=netstat
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=nfs
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=nvme
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=softnet
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=systemd
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=xfs
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.038Z caller=node_exporter.go:117 level=info collector=zfs
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.039Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  1 14:21:57 np0005541455 node_exporter[201869]: ts=2025-12-01T19:21:57.039Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  1 14:21:57 np0005541455 podman[201854]: 2025-12-01 19:21:57.057699855 +0000 UTC m=+0.220522990 container start 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 14:21:57 np0005541455 podman[201854]: node_exporter
Dec  1 14:21:57 np0005541455 systemd[1]: Started node_exporter container.
Dec  1 14:21:57 np0005541455 podman[201878]: 2025-12-01 19:21:57.156362464 +0000 UTC m=+0.087019403 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:21:57 np0005541455 python3.9[202053]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:21:57 np0005541455 systemd[1]: Stopping node_exporter container...
Dec  1 14:21:58 np0005541455 systemd[1]: libpod-9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2.scope: Deactivated successfully.
Dec  1 14:21:58 np0005541455 podman[202057]: 2025-12-01 19:21:58.030396904 +0000 UTC m=+0.062524071 container died 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 14:21:58 np0005541455 systemd[1]: 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2-76e624430143b57c.timer: Deactivated successfully.
Dec  1 14:21:58 np0005541455 systemd[1]: Stopped /usr/bin/podman healthcheck run 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2.
Dec  1 14:21:58 np0005541455 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2-userdata-shm.mount: Deactivated successfully.
Dec  1 14:21:58 np0005541455 systemd[1]: var-lib-containers-storage-overlay-af3e1ab850098d4002a5852171558d4a3e5ff19417f8e1f29bddf28555050f02-merged.mount: Deactivated successfully.
Dec  1 14:21:58 np0005541455 podman[202057]: 2025-12-01 19:21:58.076756185 +0000 UTC m=+0.108883382 container cleanup 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 14:21:58 np0005541455 podman[202057]: node_exporter
Dec  1 14:21:58 np0005541455 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  1 14:21:58 np0005541455 podman[202086]: node_exporter
Dec  1 14:21:58 np0005541455 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  1 14:21:58 np0005541455 systemd[1]: Stopped node_exporter container.
Dec  1 14:21:58 np0005541455 systemd[1]: Starting node_exporter container...
Dec  1 14:21:58 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:21:58 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3e1ab850098d4002a5852171558d4a3e5ff19417f8e1f29bddf28555050f02/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:58 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3e1ab850098d4002a5852171558d4a3e5ff19417f8e1f29bddf28555050f02/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 14:21:58 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2.
Dec  1 14:21:58 np0005541455 podman[202099]: 2025-12-01 19:21:58.302202449 +0000 UTC m=+0.116791701 container init 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.326Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.326Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.326Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.327Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.327Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=arp
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=bcache
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=bonding
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=cpu
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=edac
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=filefd
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=netclass
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=netdev
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=netstat
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=nfs
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=nvme
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=softnet
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.328Z caller=node_exporter.go:117 level=info collector=systemd
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.329Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.329Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.329Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.329Z caller=node_exporter.go:117 level=info collector=xfs
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.329Z caller=node_exporter.go:117 level=info collector=zfs
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.329Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  1 14:21:58 np0005541455 node_exporter[202114]: ts=2025-12-01T19:21:58.330Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  1 14:21:58 np0005541455 podman[202099]: 2025-12-01 19:21:58.342340994 +0000 UTC m=+0.156930226 container start 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:21:58 np0005541455 podman[202099]: node_exporter
Dec  1 14:21:58 np0005541455 systemd[1]: Started node_exporter container.
Dec  1 14:21:58 np0005541455 podman[202123]: 2025-12-01 19:21:58.429687135 +0000 UTC m=+0.068626503 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 14:21:59 np0005541455 python3.9[202296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:21:59 np0005541455 python3.9[202419]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616918.5886788-663-249637384630066/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:22:00 np0005541455 python3.9[202571]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec  1 14:22:01 np0005541455 python3.9[202723]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:22:02 np0005541455 python3[202875]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:22:03 np0005541455 podman[202887]: 2025-12-01 19:22:03.749857222 +0000 UTC m=+1.388880494 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  1 14:22:03 np0005541455 podman[202982]: 2025-12-01 19:22:03.894891782 +0000 UTC m=+0.042173860 container create 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 14:22:03 np0005541455 podman[202982]: 2025-12-01 19:22:03.872830847 +0000 UTC m=+0.020112945 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  1 14:22:03 np0005541455 python3[202875]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec  1 14:22:04 np0005541455 python3.9[203171]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:22:05 np0005541455 podman[203277]: 2025-12-01 19:22:05.278460357 +0000 UTC m=+0.054689393 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 14:22:05 np0005541455 python3.9[203345]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:06 np0005541455 python3.9[203497]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764616925.5552027-716-179286037665417/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:06 np0005541455 python3.9[203573]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:22:06 np0005541455 systemd[1]: Reloading.
Dec  1 14:22:06 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:22:06 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:22:07 np0005541455 python3.9[203684]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:22:07 np0005541455 systemd[1]: Reloading.
Dec  1 14:22:07 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:22:07 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:22:07 np0005541455 systemd[1]: Starting podman_exporter container...
Dec  1 14:22:08 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:22:08 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d7051d8f22de1e31d6f6cdaf75e2812a3937e2d57ddd012bae9f925206d04a/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:22:08 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d7051d8f22de1e31d6f6cdaf75e2812a3937e2d57ddd012bae9f925206d04a/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 14:22:08 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236.
Dec  1 14:22:08 np0005541455 podman[203724]: 2025-12-01 19:22:08.114915583 +0000 UTC m=+0.206162136 container init 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 14:22:08 np0005541455 podman_exporter[203739]: ts=2025-12-01T19:22:08.129Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  1 14:22:08 np0005541455 podman_exporter[203739]: ts=2025-12-01T19:22:08.129Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  1 14:22:08 np0005541455 podman_exporter[203739]: ts=2025-12-01T19:22:08.129Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  1 14:22:08 np0005541455 podman_exporter[203739]: ts=2025-12-01T19:22:08.129Z caller=handler.go:105 level=info collector=container
Dec  1 14:22:08 np0005541455 podman[203724]: 2025-12-01 19:22:08.14020129 +0000 UTC m=+0.231447763 container start 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 14:22:08 np0005541455 podman[203724]: podman_exporter
Dec  1 14:22:08 np0005541455 systemd[1]: Starting Podman API Service...
Dec  1 14:22:08 np0005541455 systemd[1]: Started Podman API Service.
Dec  1 14:22:08 np0005541455 systemd[1]: Started podman_exporter container.
Dec  1 14:22:08 np0005541455 podman[203750]: time="2025-12-01T19:22:08Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec  1 14:22:08 np0005541455 podman[203750]: time="2025-12-01T19:22:08Z" level=info msg="Setting parallel job count to 25"
Dec  1 14:22:08 np0005541455 podman[203750]: time="2025-12-01T19:22:08Z" level=info msg="Using sqlite as database backend"
Dec  1 14:22:08 np0005541455 podman[203750]: time="2025-12-01T19:22:08Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec  1 14:22:08 np0005541455 podman[203750]: time="2025-12-01T19:22:08Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec  1 14:22:08 np0005541455 podman[203750]: time="2025-12-01T19:22:08Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec  1 14:22:08 np0005541455 podman[203750]: @ - - [01/Dec/2025:19:22:08 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  1 14:22:08 np0005541455 podman[203750]: time="2025-12-01T19:22:08Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 14:22:08 np0005541455 podman[203750]: @ - - [01/Dec/2025:19:22:08 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19587 "" "Go-http-client/1.1"
Dec  1 14:22:08 np0005541455 podman_exporter[203739]: ts=2025-12-01T19:22:08.211Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  1 14:22:08 np0005541455 podman_exporter[203739]: ts=2025-12-01T19:22:08.211Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  1 14:22:08 np0005541455 podman_exporter[203739]: ts=2025-12-01T19:22:08.212Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  1 14:22:08 np0005541455 podman[203748]: 2025-12-01 19:22:08.21349727 +0000 UTC m=+0.063309087 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:22:08 np0005541455 systemd[1]: 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236-2b30acf1ad37caad.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:22:08 np0005541455 systemd[1]: 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236-2b30acf1ad37caad.service: Failed with result 'exit-code'.
Dec  1 14:22:09 np0005541455 python3.9[203934]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:22:09 np0005541455 systemd[1]: Stopping podman_exporter container...
Dec  1 14:22:09 np0005541455 podman[203750]: @ - - [01/Dec/2025:19:22:08 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec  1 14:22:09 np0005541455 systemd[1]: libpod-61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236.scope: Deactivated successfully.
Dec  1 14:22:09 np0005541455 podman[203938]: 2025-12-01 19:22:09.108975576 +0000 UTC m=+0.053439865 container died 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 14:22:09 np0005541455 systemd[1]: 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236-2b30acf1ad37caad.timer: Deactivated successfully.
Dec  1 14:22:09 np0005541455 systemd[1]: Stopped /usr/bin/podman healthcheck run 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236.
Dec  1 14:22:09 np0005541455 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236-userdata-shm.mount: Deactivated successfully.
Dec  1 14:22:09 np0005541455 systemd[1]: var-lib-containers-storage-overlay-84d7051d8f22de1e31d6f6cdaf75e2812a3937e2d57ddd012bae9f925206d04a-merged.mount: Deactivated successfully.
Dec  1 14:22:09 np0005541455 podman[203938]: 2025-12-01 19:22:09.437747676 +0000 UTC m=+0.382211995 container cleanup 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 14:22:09 np0005541455 podman[203938]: podman_exporter
Dec  1 14:22:09 np0005541455 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  1 14:22:09 np0005541455 podman[203967]: podman_exporter
Dec  1 14:22:09 np0005541455 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
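status=2/INVALIDARGUMENT is only systemd's symbolic name for a plain exit code 2 from the unit's main process during this stop/restart cycle, not evidence of a bad command-line argument. A sketch for pulling the surrounding evidence, using the unit name from the log:

    systemctl status edpm_podman_exporter.service
    journalctl -u edpm_podman_exporter.service -o short-precise --since "14:21"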
Dec  1 14:22:09 np0005541455 systemd[1]: Stopped podman_exporter container.
Dec  1 14:22:09 np0005541455 systemd[1]: Starting podman_exporter container...
Dec  1 14:22:09 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:22:09 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d7051d8f22de1e31d6f6cdaf75e2812a3937e2d57ddd012bae9f925206d04a/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:22:09 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d7051d8f22de1e31d6f6cdaf75e2812a3937e2d57ddd012bae9f925206d04a/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
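These kernel lines are informational: the overlay bind mounts sit on an XFS filesystem whose inodes carry 32-bit timestamps, so the kernel notes the 2038 limit at each remount. Filesystems created with the bigtime feature do not log this; a sketch for checking, assuming container storage lives under /var/lib/containers as in the paths above:

    # Newer xfsprogs report bigtime=0/1 in the meta-data section;
    # bigtime=1 means timestamps run past 2038
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'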
Dec  1 14:22:09 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236.
Dec  1 14:22:09 np0005541455 podman[203980]: 2025-12-01 19:22:09.654693572 +0000 UTC m=+0.120571551 container init 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:22:09 np0005541455 podman_exporter[203996]: ts=2025-12-01T19:22:09.666Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  1 14:22:09 np0005541455 podman_exporter[203996]: ts=2025-12-01T19:22:09.666Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  1 14:22:09 np0005541455 podman_exporter[203996]: ts=2025-12-01T19:22:09.666Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  1 14:22:09 np0005541455 podman_exporter[203996]: ts=2025-12-01T19:22:09.666Z caller=handler.go:105 level=info collector=container
Dec  1 14:22:09 np0005541455 podman[203750]: @ - - [01/Dec/2025:19:22:09 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  1 14:22:09 np0005541455 podman[203750]: time="2025-12-01T19:22:09Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 14:22:09 np0005541455 podman[203980]: 2025-12-01 19:22:09.688171467 +0000 UTC m=+0.154049436 container start 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 14:22:09 np0005541455 podman[203980]: podman_exporter
Dec  1 14:22:09 np0005541455 podman[203750]: @ - - [01/Dec/2025:19:22:09 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19589 "" "Go-http-client/1.1"
Dec  1 14:22:09 np0005541455 systemd[1]: Started podman_exporter container.
Dec  1 14:22:09 np0005541455 podman_exporter[203996]: ts=2025-12-01T19:22:09.695Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  1 14:22:09 np0005541455 podman_exporter[203996]: ts=2025-12-01T19:22:09.696Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  1 14:22:09 np0005541455 podman_exporter[203996]: ts=2025-12-01T19:22:09.697Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  1 14:22:09 np0005541455 podman[204005]: 2025-12-01 19:22:09.739426922 +0000 UTC m=+0.042112988 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
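With health_status=healthy and TLS enabled on [::]:9882, the exporter should now answer scrapes. A sketch, assuming it is reachable on localhost and that the web config in podman_exporter.yaml does not also require a client certificate:

    # -k skips server-certificate verification; swap in --cacert <ca.crt> for a verified check
    curl -sk https://localhost:9882/metrics | head -n 5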
Dec  1 14:22:10 np0005541455 python3.9[204183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:22:11 np0005541455 python3.9[204306]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764616929.8745494-748-161066472590830/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:22:11 np0005541455 python3.9[204458]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec  1 14:22:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:22:12.162 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:22:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:22:12.163 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:22:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:22:12.163 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:22:12 np0005541455 python3.9[204610]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:22:13 np0005541455 python3[204762]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:22:15 np0005541455 podman[204777]: 2025-12-01 19:22:15.867520415 +0000 UTC m=+2.147749315 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  1 14:22:15 np0005541455 podman[204872]: 2025-12-01 19:22:15.991003566 +0000 UTC m=+0.045277508 container create b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Dec  1 14:22:15 np0005541455 podman[204872]: 2025-12-01 19:22:15.967589498 +0000 UTC m=+0.021863430 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  1 14:22:15 np0005541455 python3[204762]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
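This PODMAN-CONTAINER-DEBUG line is the module's flattened rendering of the argv it hands to podman; the unquoted config_data label (a dict full of spaces) would not survive word-splitting if pasted into a shell as-is. To read those labels back from the container that was actually created, a sketch using names from the log:

    podman inspect openstack_network_exporter \
      --format '{{index .Config.Labels "config_id"}} {{index .Config.Labels "managed_by"}}'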
Dec  1 14:22:16 np0005541455 podman[205030]: 2025-12-01 19:22:16.723646296 +0000 UTC m=+0.128888975 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 14:22:16 np0005541455 python3.9[205078]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:22:17 np0005541455 python3.9[205237]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:18 np0005541455 podman[205315]: 2025-12-01 19:22:18.288723722 +0000 UTC m=+0.055072105 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  1 14:22:18 np0005541455 podman[205313]: 2025-12-01 19:22:18.295988863 +0000 UTC m=+0.065613460 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 14:22:18 np0005541455 systemd[1]: 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de-2941d675b2b12b50.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:22:18 np0005541455 systemd[1]: 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de-2941d675b2b12b50.service: Failed with result 'exit-code'.
Dec  1 14:22:18 np0005541455 python3.9[205424]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764616937.9170544-801-116782843476468/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:19 np0005541455 python3.9[205500]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:22:19 np0005541455 systemd[1]: Reloading.
Dec  1 14:22:19 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:22:19 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
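The sysv-generator warning repeats on every daemon reload and is harmless; systemd keeps synthesizing a compat unit for the initscript at each Reloading pass. A sketch for viewing what was generated, assuming the unit keeps the script's name:

    systemctl cat network.service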
Dec  1 14:22:20 np0005541455 python3.9[205611]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:22:20 np0005541455 systemd[1]: Reloading.
Dec  1 14:22:20 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:22:20 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:22:20 np0005541455 systemd[1]: Starting openstack_network_exporter container...
Dec  1 14:22:20 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:22:20 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f026cac2f7730a5cb7e712899457680001777cdf3ed1ca0d4cc9acd690d66b63/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 14:22:20 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f026cac2f7730a5cb7e712899457680001777cdf3ed1ca0d4cc9acd690d66b63/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:22:20 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f026cac2f7730a5cb7e712899457680001777cdf3ed1ca0d4cc9acd690d66b63/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 14:22:20 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2.
Dec  1 14:22:20 np0005541455 podman[205651]: 2025-12-01 19:22:20.782160258 +0000 UTC m=+0.155393599 container init b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, version=9.6, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *bridge.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *coverage.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *datapath.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *iface.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *memory.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *ovnnorthd.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *ovn.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *ovsdbserver.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *pmd_perf.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *pmd_rxq.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: INFO    19:22:20 main.go:48: registering *vswitch.Collector
Dec  1 14:22:20 np0005541455 openstack_network_exporter[205666]: NOTICE  19:22:20 main.go:76: listening on https://:9105/metrics
Dec  1 14:22:20 np0005541455 podman[205651]: 2025-12-01 19:22:20.809998854 +0000 UTC m=+0.183232125 container start b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, vcs-type=git, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, managed_by=edpm_ansible)
Dec  1 14:22:20 np0005541455 podman[205651]: openstack_network_exporter
Dec  1 14:22:20 np0005541455 systemd[1]: Started openstack_network_exporter container.
Dec  1 14:22:20 np0005541455 podman[205677]: 2025-12-01 19:22:20.906775086 +0000 UTC m=+0.085132422 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
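openstack_network_exporter is now serving HTTPS on :9105 (the "listening on https://:9105/metrics" NOTICE above). A sketch scrape; the CA file path is an assumption based on the certs volume mounted from /var/lib/openstack/certs/telemetry/default:

    curl -s --cacert /var/lib/openstack/certs/telemetry/default/ca.crt \
      https://localhost:9105/metrics | head -n 5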
Dec  1 14:22:21 np0005541455 python3.9[205853]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:22:21 np0005541455 systemd[1]: Stopping openstack_network_exporter container...
Dec  1 14:22:21 np0005541455 systemd[1]: libpod-b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2.scope: Deactivated successfully.
Dec  1 14:22:21 np0005541455 podman[205857]: 2025-12-01 19:22:21.891838353 +0000 UTC m=+0.060673224 container died b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, version=9.6, maintainer=Red Hat, Inc.)
Dec  1 14:22:21 np0005541455 systemd[1]: b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2-55b8f6324da44433.timer: Deactivated successfully.
Dec  1 14:22:21 np0005541455 systemd[1]: Stopped /usr/bin/podman healthcheck run b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2.
Dec  1 14:22:21 np0005541455 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2-userdata-shm.mount: Deactivated successfully.
Dec  1 14:22:21 np0005541455 systemd[1]: var-lib-containers-storage-overlay-f026cac2f7730a5cb7e712899457680001777cdf3ed1ca0d4cc9acd690d66b63-merged.mount: Deactivated successfully.
Dec  1 14:22:23 np0005541455 podman[205857]: 2025-12-01 19:22:23.182876783 +0000 UTC m=+1.351711684 container cleanup b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec  1 14:22:23 np0005541455 podman[205857]: openstack_network_exporter
Dec  1 14:22:23 np0005541455 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  1 14:22:23 np0005541455 podman[205886]: openstack_network_exporter
Dec  1 14:22:23 np0005541455 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec  1 14:22:23 np0005541455 systemd[1]: Stopped openstack_network_exporter container.
Dec  1 14:22:23 np0005541455 systemd[1]: Starting openstack_network_exporter container...
Dec  1 14:22:23 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:22:23 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f026cac2f7730a5cb7e712899457680001777cdf3ed1ca0d4cc9acd690d66b63/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  1 14:22:23 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f026cac2f7730a5cb7e712899457680001777cdf3ed1ca0d4cc9acd690d66b63/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:22:23 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f026cac2f7730a5cb7e712899457680001777cdf3ed1ca0d4cc9acd690d66b63/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 14:22:23 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2.
Dec  1 14:22:23 np0005541455 podman[205899]: 2025-12-01 19:22:23.475772499 +0000 UTC m=+0.175154288 container init b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, architecture=x86_64)
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *bridge.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *coverage.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *datapath.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *iface.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *memory.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *ovnnorthd.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *ovn.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *ovsdbserver.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *pmd_perf.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *pmd_rxq.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: INFO    19:22:23 main.go:48: registering *vswitch.Collector
Dec  1 14:22:23 np0005541455 openstack_network_exporter[205914]: NOTICE  19:22:23 main.go:76: listening on https://:9105/metrics
Dec  1 14:22:23 np0005541455 podman[205899]: 2025-12-01 19:22:23.531561195 +0000 UTC m=+0.230942974 container start b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec  1 14:22:23 np0005541455 podman[205899]: openstack_network_exporter
Dec  1 14:22:23 np0005541455 systemd[1]: Started openstack_network_exporter container.
Dec  1 14:22:23 np0005541455 podman[205924]: 2025-12-01 19:22:23.640895587 +0000 UTC m=+0.104240071 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 14:22:24 np0005541455 python3.9[206099]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 14:22:25 np0005541455 python3.9[206251]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  1 14:22:26 np0005541455 python3.9[206416]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:26 np0005541455 systemd[1]: Started libpod-conmon-ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792.scope.
Dec  1 14:22:26 np0005541455 podman[206417]: 2025-12-01 19:22:26.878140028 +0000 UTC m=+0.121749438 container exec ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 14:22:26 np0005541455 podman[206417]: 2025-12-01 19:22:26.88699997 +0000 UTC m=+0.130609390 container exec_died ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 14:22:26 np0005541455 systemd[1]: libpod-conmon-ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792.scope: Deactivated successfully.
Dec  1 14:22:27 np0005541455 python3.9[206600]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:27 np0005541455 systemd[1]: Started libpod-conmon-ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792.scope.
Dec  1 14:22:27 np0005541455 podman[206601]: 2025-12-01 19:22:27.798817754 +0000 UTC m=+0.103706373 container exec ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 14:22:27 np0005541455 podman[206601]: 2025-12-01 19:22:27.832072833 +0000 UTC m=+0.136961412 container exec_died ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 14:22:27 np0005541455 systemd[1]: libpod-conmon-ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792.scope: Deactivated successfully.
Dec  1 14:22:28 np0005541455 python3.9[206783]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:29 np0005541455 podman[206907]: 2025-12-01 19:22:29.148696058 +0000 UTC m=+0.055098137 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 14:22:29 np0005541455 python3.9[206951]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  1 14:22:30 np0005541455 python3.9[207121]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:30 np0005541455 systemd[1]: Started libpod-conmon-43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b.scope.
Dec  1 14:22:30 np0005541455 podman[207122]: 2025-12-01 19:22:30.327168043 +0000 UTC m=+0.084148611 container exec 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 14:22:30 np0005541455 podman[207122]: 2025-12-01 19:22:30.337840692 +0000 UTC m=+0.094821260 container exec_died 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 14:22:30 np0005541455 systemd[1]: libpod-conmon-43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b.scope: Deactivated successfully.
Dec  1 14:22:31 np0005541455 python3.9[207305]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:31 np0005541455 systemd[1]: Started libpod-conmon-43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b.scope.
Dec  1 14:22:31 np0005541455 podman[207306]: 2025-12-01 19:22:31.154650971 +0000 UTC m=+0.097668711 container exec 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 14:22:31 np0005541455 podman[207306]: 2025-12-01 19:22:31.186067652 +0000 UTC m=+0.129085382 container exec_died 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  1 14:22:31 np0005541455 systemd[1]: libpod-conmon-43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b.scope: Deactivated successfully.
Dec  1 14:22:31 np0005541455 python3.9[207487]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:32 np0005541455 python3.9[207639]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  1 14:22:33 np0005541455 python3.9[207805]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:33 np0005541455 systemd[1]: Started libpod-conmon-eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b.scope.
Dec  1 14:22:33 np0005541455 podman[207806]: 2025-12-01 19:22:33.633741731 +0000 UTC m=+0.081281219 container exec eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 14:22:33 np0005541455 podman[207806]: 2025-12-01 19:22:33.667148005 +0000 UTC m=+0.114687533 container exec_died eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 14:22:33 np0005541455 systemd[1]: libpod-conmon-eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b.scope: Deactivated successfully.
Dec  1 14:22:34 np0005541455 python3.9[207987]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:34 np0005541455 systemd[1]: Started libpod-conmon-eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b.scope.
Dec  1 14:22:34 np0005541455 podman[207988]: 2025-12-01 19:22:34.623206777 +0000 UTC m=+0.096041409 container exec eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 14:22:34 np0005541455 podman[207988]: 2025-12-01 19:22:34.65560404 +0000 UTC m=+0.128438662 container exec_died eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 14:22:34 np0005541455 systemd[1]: libpod-conmon-eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b.scope: Deactivated successfully.
Dec  1 14:22:35 np0005541455 python3.9[208172]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:36 np0005541455 podman[208296]: 2025-12-01 19:22:36.054142632 +0000 UTC m=+0.065007632 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 14:22:36 np0005541455 python3.9[208344]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  1 14:22:37 np0005541455 python3.9[208510]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:37 np0005541455 systemd[1]: Started libpod-conmon-3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de.scope.
Dec  1 14:22:37 np0005541455 podman[208511]: 2025-12-01 19:22:37.318649006 +0000 UTC m=+0.103506066 container exec 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:22:37 np0005541455 podman[208511]: 2025-12-01 19:22:37.352417392 +0000 UTC m=+0.137274442 container exec_died 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  1 14:22:37 np0005541455 systemd[1]: libpod-conmon-3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de.scope: Deactivated successfully.
Dec  1 14:22:38 np0005541455 python3.9[208697]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:38 np0005541455 systemd[1]: Started libpod-conmon-3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de.scope.
Dec  1 14:22:38 np0005541455 podman[208698]: 2025-12-01 19:22:38.328213334 +0000 UTC m=+0.121913894 container exec 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 14:22:38 np0005541455 podman[208698]: 2025-12-01 19:22:38.358760646 +0000 UTC m=+0.152461236 container exec_died 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, config_id=edpm)
Dec  1 14:22:38 np0005541455 systemd[1]: libpod-conmon-3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de.scope: Deactivated successfully.
Dec  1 14:22:39 np0005541455 python3.9[208882]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:39 np0005541455 python3.9[209036]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  1 14:22:40 np0005541455 podman[209125]: 2025-12-01 19:22:40.324951314 +0000 UTC m=+0.081993932 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:22:40 np0005541455 nova_compute[189564]: 2025-12-01 19:22:40.584 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:40 np0005541455 nova_compute[189564]: 2025-12-01 19:22:40.609 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:40 np0005541455 nova_compute[189564]: 2025-12-01 19:22:40.609 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:40 np0005541455 nova_compute[189564]: 2025-12-01 19:22:40.609 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 14:22:40 np0005541455 python3.9[209225]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:40 np0005541455 systemd[1]: Started libpod-conmon-9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2.scope.
Dec  1 14:22:40 np0005541455 podman[209226]: 2025-12-01 19:22:40.747050195 +0000 UTC m=+0.075062191 container exec 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 14:22:40 np0005541455 podman[209226]: 2025-12-01 19:22:40.780957165 +0000 UTC m=+0.108969161 container exec_died 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:22:40 np0005541455 systemd[1]: libpod-conmon-9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2.scope: Deactivated successfully.
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.276 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.276 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.276 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.277 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.310 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.310 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.310 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.310 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.485 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.487 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5932MB free_disk=72.43852233886719GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.487 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.488 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:22:41 np0005541455 python3.9[209412]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.648 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.648 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 14:22:41 np0005541455 systemd[1]: Started libpod-conmon-9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2.scope.
Dec  1 14:22:41 np0005541455 podman[209413]: 2025-12-01 19:22:41.684433283 +0000 UTC m=+0.075467564 container exec 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.703 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.715 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.716 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 14:22:41 np0005541455 nova_compute[189564]: 2025-12-01 19:22:41.716 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:22:41 np0005541455 podman[209413]: 2025-12-01 19:22:41.718089485 +0000 UTC m=+0.109123766 container exec_died 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 14:22:41 np0005541455 systemd[1]: libpod-conmon-9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2.scope: Deactivated successfully.
Dec  1 14:22:42 np0005541455 python3.9[209593]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:42 np0005541455 nova_compute[189564]: 2025-12-01 19:22:42.688 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:42 np0005541455 nova_compute[189564]: 2025-12-01 19:22:42.688 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:42 np0005541455 nova_compute[189564]: 2025-12-01 19:22:42.689 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:22:43 np0005541455 python3.9[209747]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  1 14:22:44 np0005541455 python3.9[209913]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:44 np0005541455 systemd[1]: Started libpod-conmon-61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236.scope.
Dec  1 14:22:44 np0005541455 podman[209914]: 2025-12-01 19:22:44.947747644 +0000 UTC m=+0.637636495 container exec 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 14:22:44 np0005541455 podman[209914]: 2025-12-01 19:22:44.978237035 +0000 UTC m=+0.668125916 container exec_died 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:22:45 np0005541455 systemd[1]: libpod-conmon-61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236.scope: Deactivated successfully.
Dec  1 14:22:45 np0005541455 python3.9[210095]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:45 np0005541455 systemd[1]: Started libpod-conmon-61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236.scope.
Dec  1 14:22:45 np0005541455 podman[210096]: 2025-12-01 19:22:45.886589919 +0000 UTC m=+0.063482062 container exec 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 14:22:45 np0005541455 podman[210115]: 2025-12-01 19:22:45.94471194 +0000 UTC m=+0.048852676 container exec_died 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 14:22:45 np0005541455 podman[210096]: 2025-12-01 19:22:45.950310448 +0000 UTC m=+0.127202591 container exec_died 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 14:22:45 np0005541455 systemd[1]: libpod-conmon-61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236.scope: Deactivated successfully.
Dec  1 14:22:46 np0005541455 python3.9[210279]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:47 np0005541455 podman[210403]: 2025-12-01 19:22:47.265522497 +0000 UTC m=+0.183911707 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 14:22:47 np0005541455 python3.9[210441]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  1 14:22:48 np0005541455 python3.9[210622]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:48 np0005541455 systemd[1]: Started libpod-conmon-b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2.scope.
Dec  1 14:22:48 np0005541455 podman[210623]: 2025-12-01 19:22:48.214853166 +0000 UTC m=+0.094626004 container exec b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container)
Dec  1 14:22:48 np0005541455 podman[210623]: 2025-12-01 19:22:48.25326586 +0000 UTC m=+0.133038738 container exec_died b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-type=git, name=ubi9-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 14:22:48 np0005541455 systemd[1]: libpod-conmon-b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2.scope: Deactivated successfully.
Dec  1 14:22:48 np0005541455 podman[210656]: 2025-12-01 19:22:48.410792555 +0000 UTC m=+0.081533597 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 14:22:48 np0005541455 podman[210657]: 2025-12-01 19:22:48.418487761 +0000 UTC m=+0.082590642 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 14:22:48 np0005541455 python3.9[210846]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  1 14:22:49 np0005541455 systemd[1]: Started libpod-conmon-b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2.scope.
Dec  1 14:22:49 np0005541455 podman[210847]: 2025-12-01 19:22:49.116969062 +0000 UTC m=+0.100903175 container exec b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Dec  1 14:22:49 np0005541455 podman[210847]: 2025-12-01 19:22:49.127887779 +0000 UTC m=+0.111821862 container exec_died b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec  1 14:22:49 np0005541455 systemd[1]: libpod-conmon-b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2.scope: Deactivated successfully.
Dec  1 14:22:49 np0005541455 python3.9[211032]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:50 np0005541455 python3.9[211184]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:51 np0005541455 python3.9[211336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:22:52 np0005541455 python3.9[211459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764616971.1904464-1082-139819684411083/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:53 np0005541455 python3.9[211611]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:53 np0005541455 python3.9[211763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:22:54 np0005541455 podman[211813]: 2025-12-01 19:22:54.092177794 +0000 UTC m=+0.113340801 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-type=git, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec  1 14:22:54 np0005541455 python3.9[211861]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:55 np0005541455 python3.9[212013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:22:55 np0005541455 python3.9[212091]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0d_znbwo recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:56 np0005541455 python3.9[212243]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:22:56 np0005541455 python3.9[212321]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:22:57 np0005541455 python3.9[212473]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:22:58 np0005541455 python3[212626]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  1 14:22:59 np0005541455 python3.9[212778]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:22:59 np0005541455 podman[212804]: 2025-12-01 19:22:59.280393128 +0000 UTC m=+0.046796602 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 14:22:59 np0005541455 python3.9[212880]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:00 np0005541455 python3.9[213032]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:00 np0005541455 python3.9[213110]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:01 np0005541455 python3.9[213262]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:01 np0005541455 python3.9[213340]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:02 np0005541455 python3.9[213492]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:02 np0005541455 python3.9[213570]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:03 np0005541455 python3.9[213722]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:04 np0005541455 python3.9[213847]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764616983.1638644-1207-32919183522713/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:05 np0005541455 python3.9[213999]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:05 np0005541455 python3.9[214151]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:23:06 np0005541455 podman[214179]: 2025-12-01 19:23:06.314486819 +0000 UTC m=+0.074657958 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Dec  1 14:23:06 np0005541455 python3.9[214328]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:07 np0005541455 python3.9[214480]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:23:08 np0005541455 python3.9[214633]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:23:09 np0005541455 python3.9[214787]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:23:09 np0005541455 python3.9[214942]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:10 np0005541455 systemd[1]: session-26.scope: Deactivated successfully.
Dec  1 14:23:10 np0005541455 systemd[1]: session-26.scope: Consumed 1min 41.865s CPU time.
Dec  1 14:23:10 np0005541455 systemd-logind[797]: Session 26 logged out. Waiting for processes to exit.
Dec  1 14:23:10 np0005541455 systemd-logind[797]: Removed session 26.
Dec  1 14:23:11 np0005541455 podman[214967]: 2025-12-01 19:23:11.27139791 +0000 UTC m=+0.045240951 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 14:23:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:23:12.163 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:23:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:23:12.164 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:23:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:23:12.164 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:23:16 np0005541455 systemd-logind[797]: New session 27 of user zuul.
Dec  1 14:23:16 np0005541455 systemd[1]: Started Session 27 of User zuul.
Dec  1 14:23:17 np0005541455 python3.9[215147]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:23:17 np0005541455 systemd[1]: Reloading.
Dec  1 14:23:17 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:23:17 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:23:17 np0005541455 podman[215184]: 2025-12-01 19:23:17.621025945 +0000 UTC m=+0.099067205 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 14:23:18 np0005541455 python3.9[215358]: ansible-ansible.builtin.service_facts Invoked
Dec  1 14:23:18 np0005541455 network[215375]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  1 14:23:18 np0005541455 network[215376]: 'network-scripts' will be removed from distribution in near future.
Dec  1 14:23:18 np0005541455 network[215377]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  1 14:23:18 np0005541455 podman[215383]: 2025-12-01 19:23:18.634350032 +0000 UTC m=+0.075244698 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 14:23:18 np0005541455 podman[215384]: 2025-12-01 19:23:18.644433193 +0000 UTC m=+0.088421166 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  1 14:23:24 np0005541455 podman[215565]: 2025-12-01 19:23:24.301417358 +0000 UTC m=+0.073311786 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41)
Dec  1 14:23:24 np0005541455 python3.9[215715]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:23:26 np0005541455 python3.9[215868]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:26 np0005541455 python3.9[216020]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:27 np0005541455 python3.9[216172]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
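The #012 sequences in the command above are syslog-escaped newlines; unescaped, the shell snippet Ansible executed reads:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi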
Dec  1 14:23:28 np0005541455 python3.9[216324]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  1 14:23:29 np0005541455 python3.9[216476]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:23:29 np0005541455 systemd[1]: Reloading.
Dec  1 14:23:29 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:23:29 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:23:29 np0005541455 podman[216512]: 2025-12-01 19:23:29.591170267 +0000 UTC m=+0.054048252 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:23:29 np0005541455 podman[203750]: time="2025-12-01T19:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 14:23:29 np0005541455 podman[203750]: @ - - [01/Dec/2025:19:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22539 "" "Go-http-client/1.1"
Dec  1 14:23:29 np0005541455 podman[203750]: @ - - [01/Dec/2025:19:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3419 "" "Go-http-client/1.1"
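These two GET requests are the podman system service answering its libpod REST API over the local socket; the podman_exporter config logged at 19:23:41 points CONTAINER_HOST at unix:///run/podman/podman.sock. A roughly equivalent manual query, assuming that same socket path, would be:

    # list all containers via the libpod API over the podman socket
    curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true&external=false'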
Dec  1 14:23:30 np0005541455 python3.9[216689]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 14:23:30 np0005541455 python3.9[216842]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:23:31 np0005541455 openstack_network_exporter[205914]: ERROR   19:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 14:23:31 np0005541455 openstack_network_exporter[205914]: ERROR   19:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 14:23:31 np0005541455 openstack_network_exporter[205914]: ERROR   19:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 14:23:31 np0005541455 openstack_network_exporter[205914]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 14:23:31 np0005541455 openstack_network_exporter[205914]: ERROR   19:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 14:23:31 np0005541455 python3.9[216997]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:23:32 np0005541455 python3.9[217149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:33 np0005541455 python3.9[217270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764617011.9976215-125-222490161900521/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:23:34 np0005541455 python3.9[217422]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  1 14:23:35 np0005541455 python3.9[217573]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:36 np0005541455 python3.9[217694]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764617015.0680969-171-207959996129312/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:36 np0005541455 podman[217818]: 2025-12-01 19:23:36.557524841 +0000 UTC m=+0.055979053 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Dec  1 14:23:36 np0005541455 python3.9[217859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:37 np0005541455 python3.9[217985]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764617016.2456815-171-119932051937315/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:37 np0005541455 python3.9[218135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:38 np0005541455 python3.9[218256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764617017.3980098-171-114116842882818/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:39 np0005541455 python3.9[218406]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:23:39 np0005541455 python3.9[218558]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:23:40 np0005541455 python3.9[218710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:40 np0005541455 python3.9[218831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764617019.8456516-230-155196049661869/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
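Note that mode=420 here is the same permission as the mode=0644 used on other files in this run: Ansible logged the octal 0644 as its decimal integer value (6*64 + 4*8 + 4 = 420). A quick sanity check:

    python3 -c 'print(oct(420))'   # prints 0o644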
Dec  1 14:23:41 np0005541455 nova_compute[189564]: 2025-12-01 19:23:41.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:23:41 np0005541455 python3.9[218981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:41 np0005541455 podman[218982]: 2025-12-01 19:23:41.523585011 +0000 UTC m=+0.055623523 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 14:23:41 np0005541455 python3.9[219081]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.284 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.284 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.284 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.285 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.437 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.438 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5891MB free_disk=72.43740463256836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.439 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.439 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:23:42 np0005541455 python3.9[219231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.979 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 14:23:42 np0005541455 nova_compute[189564]: 2025-12-01 19:23:42.979 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 14:23:43 np0005541455 nova_compute[189564]: 2025-12-01 19:23:43.012 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 14:23:43 np0005541455 nova_compute[189564]: 2025-12-01 19:23:43.102 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
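For reference, placement derives usable capacity from this inventory as (total − reserved) × allocation_ratio, so here that works out to roughly (8 − 0) × 4.0 = 32 schedulable VCPU, (7680 − 512) × 1.0 = 7168 MB of RAM, and (79 − 0) × 0.9 ≈ 71 GB of disk, consistent with the idle host reported just above (used_vcpus=0, used_ram=512MB).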
Dec  1 14:23:43 np0005541455 nova_compute[189564]: 2025-12-01 19:23:43.104 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 14:23:43 np0005541455 nova_compute[189564]: 2025-12-01 19:23:43.104 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:23:43 np0005541455 python3.9[219352]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764617022.0892866-230-96508114777620/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:43 np0005541455 python3.9[219502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:44 np0005541455 nova_compute[189564]: 2025-12-01 19:23:44.104 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:23:44 np0005541455 nova_compute[189564]: 2025-12-01 19:23:44.105 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 14:23:44 np0005541455 nova_compute[189564]: 2025-12-01 19:23:44.105 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 14:23:44 np0005541455 nova_compute[189564]: 2025-12-01 19:23:44.122 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 14:23:44 np0005541455 nova_compute[189564]: 2025-12-01 19:23:44.123 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:23:44 np0005541455 python3.9[219623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764617023.2810123-230-3582846267425/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:44 np0005541455 nova_compute[189564]: 2025-12-01 19:23:44.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:23:44 np0005541455 nova_compute[189564]: 2025-12-01 19:23:44.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 14:23:44 np0005541455 python3.9[219773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:45 np0005541455 python3.9[219894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764617024.3342001-230-6788872319030/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:45 np0005541455 python3.9[220044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:46 np0005541455 python3.9[220165]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764617025.479856-230-210681791143515/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:47 np0005541455 python3.9[220315]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:47 np0005541455 python3.9[220391]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:48 np0005541455 podman[220515]: 2025-12-01 19:23:48.215104964 +0000 UTC m=+0.118742916 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 14:23:48 np0005541455 python3.9[220560]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.807 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.808 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.809 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66465dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.813 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
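The doubled space in "no  resources found this cycle" appears to be an empty resource-type placeholder in the message format rather than corruption; the skips themselves are expected here, since local-instance discovery returned nothing, matching nova-compute's "Didn't find any instances for network info cache update" at 19:23:44 and used_vcpus=0 in the resource audit.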
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.813 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.813 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.813 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.813 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 14:23:48 np0005541455 ceilometer_agent_compute[200308]: 2025-12-01 19:23:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
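
The alternating "Executing discovery process ..." / "Skip pollster ..." pairs above are one compute polling cycle: each pollster runs the local_instances discovery, the discovery returns no instances on this node, so every meter is skipped and the cycle closes with one "Finished processing pollster [...]" line per meter. A minimal sketch of that control flow, with illustrative names rather than ceilometer's actual internals:

    # Sketch of the skip logic logged by _internal_pollster_run; names are
    # illustrative, not the real ceilometer.polling.manager API.
    def run_polling_cycle(pollsters, discover):
        for name in pollsters:
            resources = discover("local_instances")   # per-pollster discovery
            if not resources:
                print(f"Skip pollster {name}, no resources found this cycle")
                continue
            # pollster.get_samples(resources) would run here
        for name in pollsters:
            print(f"Finished processing pollster [{name}].")

    # On a node with no instances, discovery yields nothing:
    run_polling_cycle(["cpu", "memory.usage"], lambda method: [])
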
Dec  1 14:23:49 np0005541455 python3.9[220722]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:23:49 np0005541455 podman[220846]: 2025-12-01 19:23:49.623381013 +0000 UTC m=+0.053933598 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 14:23:49 np0005541455 podman[220847]: 2025-12-01 19:23:49.641245082 +0000 UTC m=+0.063222529 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
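
The periodic "container health_status ... health_status=healthy" events are podman executing each container's declared healthcheck (the 'healthcheck' entry in config_data, e.g. '/openstack/healthcheck compute'). The same check can be triggered by hand; a sketch using a container name taken from the log:

    import subprocess

    # "podman healthcheck run" executes the container's configured check;
    # exit code 0 means healthy, non-zero means the check failed.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    status = "healthy" if result.returncode == 0 else "unhealthy"
    print(status, result.stdout or result.stderr)
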
Dec  1 14:23:49 np0005541455 python3.9[220906]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:23:51 np0005541455 python3.9[221064]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:51 np0005541455 python3.9[221187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764617030.0435853-349-35779967098725/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:23:52 np0005541455 python3.9[221263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:52 np0005541455 python3.9[221386]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764617030.0435853-349-35779967098725/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  1 14:23:53 np0005541455 python3.9[221538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  1 14:23:54 np0005541455 python3.9[221661]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764617033.1077754-349-78490064138181/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
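
Each ansible-ansible.legacy.stat / ansible-ansible.legacy.copy pair above is ansible's copy module at work: it stats the destination first, compares its sha1 against the source, and only transfers the file when the checksums (logged as checksum=...) differ. A sketch to recompute such a checksum locally:

    import hashlib

    # Recompute the sha1 that ansible logs (e.g. checksum=57ed53cc... for the
    # kepler healthcheck); a matching checksum means the copy is skipped.
    def sha1sum(path):
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    print(sha1sum("/var/lib/openstack/healthchecks/kepler/healthcheck"))
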
Dec  1 14:23:55 np0005541455 podman[221785]: 2025-12-01 19:23:55.149683314 +0000 UTC m=+0.065991936 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, io.buildah.version=1.33.7)
Dec  1 14:23:55 np0005541455 python3.9[221834]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec  1 14:23:56 np0005541455 python3.9[221986]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:23:57 np0005541455 python3[222138]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:23:57 np0005541455 podman[222176]: 2025-12-01 19:23:57.594686554 +0000 UTC m=+0.019036306 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  1 14:23:58 np0005541455 podman[222176]: 2025-12-01 19:23:58.405180876 +0000 UTC m=+0.829530648 container create 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 14:23:58 np0005541455 python3[222138]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec  1 14:23:59 np0005541455 python3.9[222364]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:23:59 np0005541455 podman[203750]: time="2025-12-01T19:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 14:23:59 np0005541455 podman[203750]: @ - - [01/Dec/2025:19:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25319 "" "Go-http-client/1.1"
Dec  1 14:23:59 np0005541455 podman[203750]: @ - - [01/Dec/2025:19:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3425 "" "Go-http-client/1.1"
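
The two GET lines are a client polling the podman REST service (the libpod containers/json and containers/stats endpoints) over its unix socket. The same endpoint can be queried directly; a sketch assuming the default rootful socket at /run/podman/podman.sock:

    import http.client
    import json
    import socket

    # Minimal HTTP-over-unix-socket client; the socket path is an assumption
    # (the default path for the rootful podman API service).
    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))
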
Dec  1 14:23:59 np0005541455 podman[222493]: 2025-12-01 19:23:59.911476502 +0000 UTC m=+0.077167065 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 14:24:00 np0005541455 python3.9[222546]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:24:00 np0005541455 python3.9[222697]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764617040.161327-427-39635129027002/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:24:01 np0005541455 openstack_network_exporter[205914]: ERROR   19:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 14:24:01 np0005541455 openstack_network_exporter[205914]: ERROR   19:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 14:24:01 np0005541455 openstack_network_exporter[205914]: ERROR   19:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 14:24:01 np0005541455 openstack_network_exporter[205914]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 14:24:01 np0005541455 openstack_network_exporter[205914]: ERROR   19:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
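
These appctl.go errors mean openstack_network_exporter found no control sockets for ovn-northd or ovsdb-server, and no OVS datapath to query; on a compute node that runs neither ovn-northd nor a populated userspace datapath, they are expected noise rather than a fault. The sockets it probes live in the rundirs mounted into the container ('/var/run/openvswitch' and '/var/lib/openvswitch/ovn' on the host, per its config_data); a quick check:

    import glob

    # Control sockets are conventionally named <daemon>.<pid>.ctl inside the
    # OVS/OVN rundirs; empty results explain the "no control socket files
    # found" errors above. Host paths taken from the exporter's volume mounts.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")
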
Dec  1 14:24:01 np0005541455 python3.9[222773]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:24:01 np0005541455 systemd[1]: Reloading.
Dec  1 14:24:01 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:24:01 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:24:02 np0005541455 python3.9[222884]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:24:02 np0005541455 systemd[1]: Reloading.
Dec  1 14:24:03 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:24:03 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:24:03 np0005541455 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  1 14:24:03 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:24:03 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7440de8af6a06264cef5bbbb36be527096fb3b76d58e4fec3d558c110a857554/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 14:24:03 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7440de8af6a06264cef5bbbb36be527096fb3b76d58e4fec3d558c110a857554/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:24:03 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7440de8af6a06264cef5bbbb36be527096fb3b76d58e4fec3d558c110a857554/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 14:24:03 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7440de8af6a06264cef5bbbb36be527096fb3b76d58e4fec3d558c110a857554/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 14:24:03 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d.
Dec  1 14:24:03 np0005541455 podman[222925]: 2025-12-01 19:24:03.41385179 +0000 UTC m=+0.144933836 container init 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: + sudo -E kolla_set_configs
Dec  1 14:24:03 np0005541455 podman[222925]: 2025-12-01 19:24:03.452415617 +0000 UTC m=+0.183497603 container start 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 14:24:03 np0005541455 podman[222925]: ceilometer_agent_ipmi
Dec  1 14:24:03 np0005541455 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Validating config file
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Copying service configuration files
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: INFO:__main__:Writing out command to execute
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: ++ cat /run_command
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: + ARGS=
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: + sudo kolla_copy_cacerts
Dec  1 14:24:03 np0005541455 podman[222947]: 2025-12-01 19:24:03.550871297 +0000 UTC m=+0.079968803 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 14:24:03 np0005541455 systemd[1]: 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d-2a465762769e46.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:24:03 np0005541455 systemd[1]: 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d-2a465762769e46.service: Failed with result 'exit-code'.
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: + [[ ! -n '' ]]
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: + . kolla_extend_start
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: + umask 0022
Dec  1 14:24:03 np0005541455 ceilometer_agent_ipmi[222940]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
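
The kolla_set_configs INFO lines above follow the mounted /var/lib/kolla/config_files/config.json: under the COPY_ALWAYS strategy each config_files entry is deleted at its destination, copied fresh from its source, and re-permissioned, after which the command stored in /run_command is exec'd. A simplified sketch of that copy loop (the real implementation, kolla's set_configs.py, also handles globs, ownership, and optional files):

    import json
    import os
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        config = json.load(f)

    for entry in config.get("config_files", []):
        src, dest = entry["source"], entry["dest"]
        if os.path.exists(dest):
            os.remove(dest)                        # "Deleting <dest>"
        shutil.copy(src, dest)                     # "Copying <src> to <dest>"
        if "perm" in entry:
            os.chmod(dest, int(entry["perm"], 8))  # "Setting permission for <dest>"
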
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.347 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.347 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.348 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.348 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.348 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.348 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.348 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.348 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.348 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.348 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.348 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.349 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.349 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.349 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.349 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.349 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.349 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.349 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.350 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.350 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.350 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.350 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.350 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.350 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.350 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.350 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.351 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.351 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.351 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.351 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.351 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.351 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.351 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.351 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.351 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.352 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.352 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.352 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.352 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.352 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.352 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.352 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.352 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.353 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.353 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.353 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.353 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.353 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.353 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.353 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.353 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.354 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.354 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.354 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.354 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.354 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.354 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.354 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.354 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.355 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.355 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.355 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.355 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.355 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.355 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.355 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.355 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.356 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.356 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.356 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.356 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.356 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.356 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.356 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.356 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.357 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.357 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.357 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.357 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.357 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.357 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.357 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.357 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.358 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.359 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.359 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.359 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.360 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.360 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.360 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.360 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.360 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.360 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.360 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.361 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.361 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.361 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.361 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.361 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.361 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.361 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.361 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.362 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.362 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.362 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.362 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.362 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.362 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.362 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.362 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.363 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.363 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.363 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.363 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.363 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.363 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.363 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.363 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.364 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.364 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.364 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.364 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.364 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.364 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.364 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.364 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.365 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.365 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.365 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.365 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.365 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.365 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.365 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.365 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.366 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.366 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.366 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.366 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.366 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.366 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.366 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.366 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.367 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.367 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.367 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.367 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.367 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.367 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.367 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.367 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.367 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.368 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.368 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.390 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.392 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.394 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  1 14:24:04 np0005541455 python3.9[223120]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec  1 14:24:04 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:04.512 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpm15cvcn8/privsep.sock']
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.126 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.127 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpm15cvcn8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.012 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.016 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.019 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.019 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.258 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.258 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.260 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.260 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.260 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.260 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.261 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.261 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.261 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.262 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.262 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.262 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.262 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.266 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.266 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.266 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.266 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.267 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.267 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.267 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.267 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.268 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.268 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.268 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.268 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.269 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.269 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.269 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.269 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.270 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.270 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.270 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.271 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.271 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.271 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.271 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.272 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.272 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.272 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.272 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.272 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.273 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.273 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.273 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.273 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.273 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.274 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.274 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.274 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.274 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.275 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.275 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.275 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.275 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.276 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.276 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.276 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.276 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.277 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.277 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.277 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.277 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.278 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.278 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.278 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.278 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.279 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.279 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.279 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.279 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.279 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.280 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.280 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.280 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.280 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.281 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.281 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.281 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.281 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.282 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.282 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.282 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.282 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.283 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.283 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.283 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.284 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.284 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.284 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.284 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.285 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.285 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.285 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.285 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.286 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.286 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.286 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.286 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.286 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.287 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.287 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.287 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.287 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.288 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.288 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.288 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.288 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.289 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.289 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.289 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.289 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.290 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.290 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.290 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.290 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.291 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.291 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.291 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.291 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.292 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.292 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.292 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.292 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.293 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.293 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.293 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.294 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.294 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.294 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.294 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.294 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.295 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.295 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.295 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.295 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.296 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.296 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.296 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.296 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.297 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.297 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.297 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.298 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.298 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.298 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.298 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.299 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.299 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.299 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.299 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.300 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.300 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.300 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.300 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.300 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.301 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.301 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.301 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.302 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.302 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.302 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.302 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.303 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.305 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.306 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.306 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.307 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.307 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.307 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.307 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.308 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.308 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.308 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.308 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.309 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.309 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.309 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.309 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.310 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.310 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.310 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.310 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.311 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.311 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.311 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.311 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.312 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.312 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.312 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.312 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.312 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.313 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.313 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.313 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.313 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.314 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.314 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.314 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.314 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.315 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  1 14:24:05 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:05.318 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  1 14:24:05 np0005541455 python3.9[223281]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  1 14:24:06 np0005541455 python3[223437]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec  1 14:24:06 np0005541455 podman[223475]: 2025-12-01 19:24:06.641387668 +0000 UTC m=+0.060723152 container create 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc.)
Dec  1 14:24:06 np0005541455 podman[223475]: 2025-12-01 19:24:06.604872545 +0000 UTC m=+0.024208009 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  1 14:24:06 np0005541455 python3[223437]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec  1 14:24:07 np0005541455 podman[223624]: 2025-12-01 19:24:07.279257978 +0000 UTC m=+0.050969956 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 14:24:07 np0005541455 python3.9[223687]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  1 14:24:08 np0005541455 python3.9[223841]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:24:09 np0005541455 python3.9[223992]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764617048.558851-489-94382357098837/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  1 14:24:09 np0005541455 python3.9[224068]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  1 14:24:09 np0005541455 systemd[1]: Reloading.
Dec  1 14:24:10 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:24:10 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:24:10 np0005541455 python3.9[224180]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  1 14:24:10 np0005541455 systemd[1]: Reloading.
Dec  1 14:24:11 np0005541455 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  1 14:24:11 np0005541455 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  1 14:24:11 np0005541455 systemd[1]: Starting kepler container...
Dec  1 14:24:11 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:24:11 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76.
Dec  1 14:24:11 np0005541455 podman[224220]: 2025-12-01 19:24:11.386999409 +0000 UTC m=+0.091478423 container init 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, com.redhat.component=ubi9-container, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Dec  1 14:24:11 np0005541455 kepler[224236]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 14:24:11 np0005541455 kepler[224236]: I1201 19:24:11.412626       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  1 14:24:11 np0005541455 kepler[224236]: I1201 19:24:11.412737       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  1 14:24:11 np0005541455 kepler[224236]: I1201 19:24:11.412758       1 config.go:295] kernel version: 5.14
Dec  1 14:24:11 np0005541455 kepler[224236]: I1201 19:24:11.413291       1 power.go:78] Unable to obtain power, use estimate method
Dec  1 14:24:11 np0005541455 kepler[224236]: I1201 19:24:11.413311       1 redfish.go:169] failed to get redfish credential file path
Dec  1 14:24:11 np0005541455 kepler[224236]: I1201 19:24:11.413628       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  1 14:24:11 np0005541455 kepler[224236]: I1201 19:24:11.413636       1 power.go:79] using none to obtain power
Dec  1 14:24:11 np0005541455 kepler[224236]: E1201 19:24:11.413649       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  1 14:24:11 np0005541455 kepler[224236]: E1201 19:24:11.413666       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  1 14:24:11 np0005541455 kepler[224236]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  1 14:24:11 np0005541455 kepler[224236]: I1201 19:24:11.415156       1 exporter.go:84] Number of CPUs: 8
Dec  1 14:24:11 np0005541455 podman[224220]: 2025-12-01 19:24:11.422283144 +0000 UTC m=+0.126762178 container start 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, architecture=x86_64, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible)
Dec  1 14:24:11 np0005541455 podman[224220]: kepler
Dec  1 14:24:11 np0005541455 systemd[1]: Started kepler container.
Dec  1 14:24:11 np0005541455 podman[224246]: 2025-12-01 19:24:11.518641528 +0000 UTC m=+0.084370120 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, build-date=2024-09-18T21:23:30, vcs-type=git, architecture=x86_64)
Dec  1 14:24:11 np0005541455 systemd[1]: 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76-245c16d177b69844.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:24:11 np0005541455 systemd[1]: 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76-245c16d177b69844.service: Failed with result 'exit-code'.
Dec  1 14:24:11 np0005541455 podman[224282]: 2025-12-01 19:24:11.630979734 +0000 UTC m=+0.068060590 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.001096       1 watcher.go:83] Using in cluster k8s config
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.001160       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  1 14:24:12 np0005541455 kepler[224236]: E1201 19:24:12.001270       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.008753       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.009131       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.017921       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.017948       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.031874       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.032221       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.032474       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.047387       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.047794       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.048043       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.048276       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.048510       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.048877       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.049257       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.049731       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.050140       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.050499       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.051075       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  1 14:24:12 np0005541455 kepler[224236]: I1201 19:24:12.052231       1 exporter.go:208] Started Kepler in 639.810611ms
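At this point kepler is serving Prometheus metrics on 0.0.0.0:8888, matching the 'ports': ['8888:8888'] entry in its container config above. A quick smoke test, sketched in Python with only the standard library; /metrics is the standard Prometheus exporter path, and the kepler_ metric-name prefix is upstream kepler's convention (an assumption here, not taken from this log):

    # Sketch: confirm the kepler exporter answers on the port logged above.
    import urllib.request

    with urllib.request.urlopen("http://127.0.0.1:8888/metrics", timeout=5) as resp:
        body = resp.read().decode("utf-8", errors="replace")

    # Count exposed kepler_* samples (skip HELP/TYPE comment lines).
    kepler_lines = [l for l in body.splitlines()
                    if l.startswith("kepler_") and not l.startswith("#")]
    print(f"{len(kepler_lines)} kepler_* samples exposed")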
Dec  1 14:24:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:24:12.165 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 14:24:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:24:12.165 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 14:24:12 np0005541455 ovn_metadata_agent[106828]: 2025-12-01 19:24:12.165 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 14:24:12 np0005541455 python3.9[224454]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:24:12 np0005541455 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec  1 14:24:12 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:12.647 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  1 14:24:12 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:12.749 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec  1 14:24:12 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:12.750 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec  1 14:24:12 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:12.750 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec  1 14:24:12 np0005541455 ceilometer_agent_ipmi[222940]: 2025-12-01 19:24:12.764 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec  1 14:24:12 np0005541455 systemd[1]: libpod-34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d.scope: Deactivated successfully.
Dec  1 14:24:12 np0005541455 systemd[1]: libpod-34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d.scope: Consumed 2.144s CPU time.
Dec  1 14:24:12 np0005541455 podman[224458]: 2025-12-01 19:24:12.936365892 +0000 UTC m=+0.347613068 container died 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 14:24:12 np0005541455 systemd[1]: 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d-2a465762769e46.timer: Deactivated successfully.
Dec  1 14:24:12 np0005541455 systemd[1]: Stopped /usr/bin/podman healthcheck run 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d.
Dec  1 14:24:12 np0005541455 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d-userdata-shm.mount: Deactivated successfully.
Dec  1 14:24:13 np0005541455 systemd[1]: var-lib-containers-storage-overlay-7440de8af6a06264cef5bbbb36be527096fb3b76d58e4fec3d558c110a857554-merged.mount: Deactivated successfully.
Dec  1 14:24:13 np0005541455 podman[224458]: 2025-12-01 19:24:13.038186039 +0000 UTC m=+0.449433145 container cleanup 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec  1 14:24:13 np0005541455 podman[224458]: ceilometer_agent_ipmi
Dec  1 14:24:13 np0005541455 podman[224486]: ceilometer_agent_ipmi
Dec  1 14:24:13 np0005541455 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec  1 14:24:13 np0005541455 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec  1 14:24:13 np0005541455 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  1 14:24:13 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:24:13 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7440de8af6a06264cef5bbbb36be527096fb3b76d58e4fec3d558c110a857554/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  1 14:24:13 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7440de8af6a06264cef5bbbb36be527096fb3b76d58e4fec3d558c110a857554/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  1 14:24:13 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7440de8af6a06264cef5bbbb36be527096fb3b76d58e4fec3d558c110a857554/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  1 14:24:13 np0005541455 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7440de8af6a06264cef5bbbb36be527096fb3b76d58e4fec3d558c110a857554/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  1 14:24:13 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d.
Dec  1 14:24:13 np0005541455 podman[224499]: 2025-12-01 19:24:13.347519968 +0000 UTC m=+0.187492588 container init 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: + sudo -E kolla_set_configs
Dec  1 14:24:13 np0005541455 podman[224499]: 2025-12-01 19:24:13.400204257 +0000 UTC m=+0.240176857 container start 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 14:24:13 np0005541455 podman[224499]: ceilometer_agent_ipmi
Dec  1 14:24:13 np0005541455 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Validating config file
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Copying service configuration files
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: INFO:__main__:Writing out command to execute
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: ++ cat /run_command
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: + ARGS=
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: + sudo kolla_copy_cacerts
Dec  1 14:24:13 np0005541455 podman[224522]: 2025-12-01 19:24:13.502455877 +0000 UTC m=+0.082228965 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: + [[ ! -n '' ]]
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: + . kolla_extend_start
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: + umask 0022
Dec  1 14:24:13 np0005541455 ceilometer_agent_ipmi[224515]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
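The shell trace above is the whole kolla_start contract: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json, the launch command is read from /run_command, kolla_copy_cacerts merges CA certificates, and the service is finally exec'd so it replaces the wrapper as the container's main process. A minimal sketch of that final step in Python (the real kolla_start is a shell script), using only the paths visible in the trace:

    # Sketch of kolla_start's tail end, as traced above: read /run_command
    # and exec it so the service replaces the wrapper process. Illustrative
    # analogue only; the real implementation is shell.
    import os
    import shlex

    with open("/run_command") as f:
        cmd = f.read().strip()   # e.g. "/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout"

    os.umask(0o022)              # matches the `umask 0022` in the trace
    argv = shlex.split(cmd)
    os.execvp(argv[0], argv)     # exec, no fork: the service keeps PID 1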
Dec  1 14:24:13 np0005541455 systemd[1]: 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d-25cbe42641d886e6.service: Main process exited, code=exited, status=1/FAILURE
Dec  1 14:24:13 np0005541455 systemd[1]: 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d-25cbe42641d886e6.service: Failed with result 'exit-code'.
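The transient unit named <container-id>-<hash>.service here is the timer-driven podman healthcheck runner for the container just restarted; its status=1/FAILURE means the configured check (/openstack/healthcheck ipmi) failed, consistent with health_status=starting and health_failing_streak=1 on a service that has not finished initializing. The same check can be driven by hand; a sketch, assuming podman is on PATH and using the container name from the log:

    # Sketch: re-run the container healthcheck that the transient systemd
    # unit above executes. `podman healthcheck run` is the real subcommand.
    import subprocess

    res = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_ipmi"],
        capture_output=True, text=True,
    )
    # Exit 0 = healthy; non-zero mirrors the status=1/FAILURE seen above.
    print("healthcheck exit:", res.returncode, res.stdout, res.stderr)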
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.300 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.300 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.300 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.300 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.300 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.300 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.300 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.300 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.301 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.302 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.303 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.304 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.305 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.306 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.307 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.308 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.309 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.310 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.311 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.312 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.313 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.313 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.313 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.313 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.313 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
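[Editor's note] The asterisk banner above closes the full option dump that cotyledon emits through oslo.config when the service starts. A minimal sketch of the mechanism, assuming only that oslo.config is installed (the two options shown are stand-ins for the real set):

    # Sketch: oslo.config's log_opt_values() walks every registered option,
    # including those still at their defaults, and logs one "name = value"
    # line per option -- which is what produces the dump above.
    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('batch_size', default=50),          # names mirror the log
        cfg.StrOpt('cfg_file', default='polling.yaml'),
    ])

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('cotyledon.oslo_config_glue')

    CONF([])                                # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)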
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.331 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.333 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.334 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
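[Editor's note] The agent looked for dynamic pollster definitions under /etc/ceilometer/pollsters.d and found none. For illustration, a hypothetical minimal definition of the kind it expects, embedded as a string and parsed with PyYAML (field names follow the ceilometer dynamic-pollster documentation; treat this as a sketch, not a drop-in file):

    # Hypothetical dynamic pollster definition; the agent above would load
    # YAML files of this shape from /etc/ceilometer/pollsters.d.
    import yaml

    POLLSTER_YAML = """
    - name: "dynamic.network.services.vpn.connection"
      sample_type: "gauge"
      unit: "ipsec_site_connection"
      value_attribute: "status"
      endpoint_type: "network"
      url_path: "v2.0/vpn/ipsec-site-connections"
    """

    for d in yaml.safe_load(POLLSTER_YAML):
        print(d['name'], d['sample_type'], d['unit'])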
Dec  1 14:24:14 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.350 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmps10mi0x8/privsep.sock']
Dec  1 14:24:14 np0005541455 python3.9[224696]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 14:24:14 np0005541455 systemd[1]: Stopping kepler container...
Dec  1 14:24:14 np0005541455 kepler[224236]: I1201 19:24:14.640658       1 exporter.go:218] Received shutdown signal
Dec  1 14:24:14 np0005541455 kepler[224236]: I1201 19:24:14.641255       1 exporter.go:226] Exiting...
Dec  1 14:24:14 np0005541455 systemd[1]: libpod-23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76.scope: Deactivated successfully.
Dec  1 14:24:14 np0005541455 podman[224707]: 2025-12-01 19:24:14.850239052 +0000 UTC m=+0.283051779 container died 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler)
Dec  1 14:24:14 np0005541455 systemd[1]: 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76-245c16d177b69844.timer: Deactivated successfully.
Dec  1 14:24:14 np0005541455 systemd[1]: Stopped /usr/bin/podman healthcheck run 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76.
Dec  1 14:24:14 np0005541455 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76-userdata-shm.mount: Deactivated successfully.
Dec  1 14:24:14 np0005541455 systemd[1]: var-lib-containers-storage-overlay-6bf4abdce0648e5b651a3fdf123b92df72d8febc048e3ec2d330eaea1d2ef90e-merged.mount: Deactivated successfully.
Dec  1 14:24:14 np0005541455 podman[224707]: 2025-12-01 19:24:14.906801552 +0000 UTC m=+0.339614269 container cleanup 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.openshift.tags=base rhel9)
Dec  1 14:24:14 np0005541455 podman[224707]: kepler
Dec  1 14:24:14 np0005541455 podman[224738]: kepler
Dec  1 14:24:14 np0005541455 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec  1 14:24:14 np0005541455 systemd[1]: Stopped kepler container.
Dec  1 14:24:14 np0005541455 systemd[1]: Starting kepler container...
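[Editor's note] The config_data blob logged with the kepler container records how edpm_ansible materializes it. A sketch that turns a trimmed-down version of that dict into a podman run argv; the key-to-flag mapping is a simplified assumption, not the exact edpm_ansible implementation:

    # Sketch: derive a `podman run` command line from the logged config_data.
    config_data = {
        'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12',
        'privileged': 'true',
        'net': 'host',
        'command': '-v=2',
        'environment': {'ENABLE_GPU': 'true'},
        'volumes': ['/lib/modules:/lib/modules:ro', '/sys:/sys'],
    }

    argv = ['podman', 'run', '--name', 'kepler', '--network', config_data['net']]
    if config_data.get('privileged') == 'true':
        argv.append('--privileged')
    for k, v in config_data.get('environment', {}).items():
        argv += ['--env', f'{k}={v}']
    for vol in config_data.get('volumes', []):
        argv += ['--volume', vol]
    argv.append(config_data['image'])
    argv += config_data['command'].split()   # container command: -v=2
    print(' '.join(argv))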
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.002 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.003 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps10mi0x8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.898 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.903 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.912 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:14.913 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
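[Editor's note] The privsep lines above show oslo.privsep forking a root helper via rootwrap and then dropping to the bounded capability set that is logged. A sketch of how a service defines such a context (constructor arguments per oslo.privsep; the capability list mirrors the eff/prm set in the log):

    # Sketch of an oslo.privsep context like ceilometer.privsep.sys_admin_pctxt.
    # Functions decorated with .entrypoint run inside the privileged daemon.
    from oslo_privsep import capabilities as c
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[c.CAP_SYS_ADMIN, c.CAP_DAC_OVERRIDE,
                      c.CAP_DAC_READ_SEARCH, c.CAP_CHOWN,
                      c.CAP_FOWNER, c.CAP_NET_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_protected_file(path):
        # Executed in the privsep daemon (pid 19 above), not the agent.
        with open(path) as f:
            return f.read()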
Dec  1 14:24:15 np0005541455 systemd[1]: Started libcrun container.
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.108 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.110 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.111 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.112 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.112 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.112 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.112 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.112 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.113 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.113 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.113 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.113 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.113 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
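[Editor's note] Every hardware.ipmi.* pollster was skipped because ipmitool is unusable on this Nova VM (no BMC), leaving the ipmi namespace empty. A rough standalone probe that fails the same way on a VM; this is an illustrative check, not ceilometer's actual code path:

    # Rough local-IPMI availability probe. "raw 0x06 0x01" is the IPMI
    # Get Device ID command; on a VM with no BMC it errors out.
    import shutil
    import subprocess

    def ipmi_supported() -> bool:
        if shutil.which('ipmitool') is None:
            return False
        try:
            subprocess.run(['ipmitool', 'raw', '0x06', '0x01'],
                           check=True, capture_output=True, timeout=5)
            return True
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            return False

    print('IPMI supported:', ipmi_supported())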
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.116 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.116 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.116 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.116 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.117 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.117 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.117 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.117 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.117 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.117 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.118 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.118 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.118 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.118 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.119 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.119 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.119 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.119 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.119 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.120 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.120 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.120 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.120 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.120 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.120 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.121 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.121 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.121 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.121 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.121 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 systemd[1]: Started /usr/bin/podman healthcheck run 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76.
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.122 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.122 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.122 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.122 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.122 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.122 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.123 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.123 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.123 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.123 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.123 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.123 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 14:24:15 np0005541455 ceilometer_agent_ipmi[224515]: 2025-12-01 19:24:15.124 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  1 19:25:42 compute-0 python3.9[236870]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  1 19:25:42 compute-0 systemd[1]: Stopping System Logging Service...
Dec  1 19:25:42 compute-0 rsyslogd[1005]: imjournal: 685 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  1 19:25:42 compute-0 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec  1 19:25:42 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec  1 19:25:42 compute-0 systemd[1]: Stopped System Logging Service.
Dec  1 19:25:42 compute-0 systemd[1]: rsyslog.service: Consumed 4.298s CPU time, 7.8M memory peak, read 0B from disk, written 6.4M to disk.
Dec  1 19:25:42 compute-0 systemd[1]: Starting System Logging Service...
Dec  1 19:25:43 compute-0 rsyslogd[236874]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236874" x-info="https://www.rsyslog.com"] start
Dec  1 19:25:43 compute-0 systemd[1]: Started System Logging Service.
Dec  1 19:25:43 compute-0 rsyslogd[236874]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 19:25:43 compute-0 rsyslogd[236874]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec  1 19:25:43 compute-0 rsyslogd[236874]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec  1 19:25:43 compute-0 rsyslogd[236874]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec  1 19:25:43 compute-0 rsyslogd[236874]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
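[Editor's note] rsyslog warns that no client certificate or key is configured, then still negotiates TLS to the remote syslog server at 172.17.0.80. A hedged sketch probing the same endpoint from Python; 172.17.0.80 is from the log, port 6514 (RFC 5425 syslog-over-TLS) is an assumption, and verification is relaxed to mirror the anonymous client above:

    # Probe TLS to the remote syslog server seen in the log.
    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # log shows a client with no cert/key set
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection(('172.17.0.80', 6514), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            print('negotiated:', tls.version(), tls.cipher())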
Dec  1 19:25:43 compute-0 podman[236879]: 2025-12-01 19:25:43.190842934 +0000 UTC m=+0.076771129 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:25:43 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec  1 19:25:43 compute-0 systemd[1]: session-28.scope: Consumed 10.242s CPU time.
Dec  1 19:25:43 compute-0 systemd-logind[797]: Session 28 logged out. Waiting for processes to exit.
Dec  1 19:25:43 compute-0 systemd-logind[797]: Removed session 28.
Dec  1 19:25:44 compute-0 podman[236927]: 2025-12-01 19:25:44.75788824 +0000 UTC m=+0.088905534 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
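[Editor's note] Both exporter containers report health_status=healthy from their periodic `podman healthcheck run` timers. The same state can be read back with `podman inspect`; a sketch (the `.State.Health.Status` template path assumes a reasonably recent podman, older releases expose `.State.Healthcheck.Status`):

    # Read a container's last healthcheck state, as logged above.
    import subprocess

    def health_status(name: str) -> str:
        out = subprocess.run(
            ['podman', 'inspect', '--format', '{{.State.Health.Status}}', name],
            check=True, capture_output=True, text=True)
        return out.stdout.strip()

    print(health_status('ceilometer_agent_ipmi'))   # e.g. "healthy"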
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.635 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.635 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.635 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.635 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.784 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.784 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.784 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.784 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.784 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.785 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.785 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:25:45 compute-0 nova_compute[189564]: 2025-12-01 19:25:45.785 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.275 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.276 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.276 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.276 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:25:46 compute-0 podman[236946]: 2025-12-01 19:25:46.296007389 +0000 UTC m=+0.069606763 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-type=git, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, release=1214.1726694543, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, name=ubi9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64)
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.572 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.573 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5718MB free_disk=72.43529510498047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.574 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.574 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.896 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:25:46 compute-0 nova_compute[189564]: 2025-12-01 19:25:46.897 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:25:47 compute-0 nova_compute[189564]: 2025-12-01 19:25:47.020 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 19:25:47 compute-0 nova_compute[189564]: 2025-12-01 19:25:47.104 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 19:25:47 compute-0 nova_compute[189564]: 2025-12-01 19:25:47.105 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 19:25:47 compute-0 nova_compute[189564]: 2025-12-01 19:25:47.121 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 19:25:47 compute-0 nova_compute[189564]: 2025-12-01 19:25:47.147 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 19:25:47 compute-0 nova_compute[189564]: 2025-12-01 19:25:47.179 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:25:47 compute-0 nova_compute[189564]: 2025-12-01 19:25:47.197 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 19:25:47 compute-0 nova_compute[189564]: 2025-12-01 19:25:47.199 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:25:47 compute-0 nova_compute[189564]: 2025-12-01 19:25:47.200 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
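[Editor's note] The resource-tracker pass above turns the hypervisor view (8 vCPUs, 7680 MB RAM, 79 GB disk) into placement inventory using the logged reserved values and allocation ratios; schedulable capacity per resource class is (total - reserved) * allocation_ratio. A worked check against the logged numbers:

    # Worked check of the placement inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {capacity:g} schedulable')
    # -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 71.1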
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.808 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.809 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.809 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.813 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.813 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
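[Editor's note] Every pollster in this cycle follows the same pattern: the discovery call via the [local_instances] method returns no instances on this host, so the pollster is skipped without sampling. A hedged sketch of that control flow (hypothetical names, not the manager.py implementation):

    # Hypothetical sketch of the discover-then-skip flow shown above.
    def internal_pollster_run(name, discover):
        resources = discover()  # e.g. the [local_instances] discovery method
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        # Only reached when discovery finds instances to sample.
        return [f"sample of {name} for {r}" for r in resources]

    # No local instances on compute-0 during this cycle, so every meter skips:
    internal_pollster_run("memory.usage", lambda: [])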
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:25:48.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:25:50 compute-0 podman[236966]: 2025-12-01 19:25:50.356035133 +0000 UTC m=+0.133572920 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller)
Dec  1 19:25:50 compute-0 podman[236991]: 2025-12-01 19:25:50.44404519 +0000 UTC m=+0.075696300 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  1 19:25:50 compute-0 podman[236996]: 2025-12-01 19:25:50.456330618 +0000 UTC m=+0.066599221 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:25:57 compute-0 podman[237024]: 2025-12-01 19:25:57.317105613 +0000 UTC m=+0.092816800 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=)
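[Editor's note] The podman journal lines above embed the container name and health check verdict inside a long key=value list. A small, self-contained way to pull those two fields out of such a line; the regex is an assumption about the layout seen here, not a podman API:

    import re

    # Shortened sample in the same layout as the podman lines above.
    line = ("container health_status ac5c9902abf0 (image=quay.io/...:current-podified, "
            "name=ovn_controller, health_status=healthy, health_failing_streak=0)")

    m = re.search(r"name=([^,)]+).*?health_status=(\w+)", line)
    if m:
        print(m.group(1), "->", m.group(2))  # ovn_controller -> healthy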
Dec  1 19:25:59 compute-0 podman[203750]: time="2025-12-01T19:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:25:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:25:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4271 "" "Go-http-client/1.1"
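[Editor's note] The two access-log style lines above are the libpod REST API being queried over podman's socket (the same unix:///run/podman/podman.sock that the podman_exporter config below mounts). A stdlib-only sketch of issuing that GET yourself; the socket path and its reachability are assumptions:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")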
Dec  1 19:26:01 compute-0 openstack_network_exporter[205914]: ERROR   19:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:26:01 compute-0 openstack_network_exporter[205914]: ERROR   19:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:26:01 compute-0 openstack_network_exporter[205914]: ERROR   19:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:26:01 compute-0 openstack_network_exporter[205914]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:26:01 compute-0 openstack_network_exporter[205914]: ERROR   19:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
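[Editor's note] The exporter errors above come from it trying to resolve daemon PIDs from OVS/OVN control sockets that do not exist on a compute node (ovn-northd runs on the control plane, not here, and no userspace datapath is configured). A hedged sketch of that lookup; the <daemon>.<pid>.ctl naming and the run directory follow the usual OVS convention and are assumptions here:

    import glob
    import os

    def find_control_sockets(run_dir, daemon):
        # OVS/OVN daemons conventionally expose <daemon>.<pid>.ctl sockets.
        return glob.glob(os.path.join(run_dir, f"{daemon}.*.ctl"))

    socks = find_control_sockets("/var/run/ovn", "ovn-northd")
    if not socks:
        print("Failed to get PID for ovn-northd: no control socket files found")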
Dec  1 19:26:02 compute-0 podman[237046]: 2025-12-01 19:26:02.300721026 +0000 UTC m=+0.071765002 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
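[Editor's note] The node_exporter container above publishes metrics on port 9100. A hedged scrape of that endpoint; this assumes plain HTTP is accepted, which may be wrong since the unit's --web.config.file can enforce TLS and auth:

    import urllib.request

    # Assumption: exporter answers plain HTTP on the 9100 port from the config.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        for metric_line in r.read().decode().splitlines()[:5]:
            print(metric_line)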
Dec  1 19:26:09 compute-0 podman[237068]: 2025-12-01 19:26:09.330704282 +0000 UTC m=+0.106640731 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:26:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:26:12.167 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:26:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:26:12.168 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:26:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:26:12.168 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
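[Editor's note] The acquire/acquired/released triple above is oslo.concurrency's standard DEBUG trace around a named lock. A minimal example producing the same pattern; requires oslo.concurrency, and the function body here is just a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Runs with the "_check_child_processes" lock held; lockutils logs the
        # acquire / waited / held timings at DEBUG, as in the lines above.
        pass

    check_child_processes()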
Dec  1 19:26:14 compute-0 podman[237088]: 2025-12-01 19:26:14.302965563 +0000 UTC m=+0.078214150 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:26:15 compute-0 podman[237112]: 2025-12-01 19:26:15.287425665 +0000 UTC m=+0.066050335 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 19:26:17 compute-0 podman[237130]: 2025-12-01 19:26:17.301026295 +0000 UTC m=+0.075677510 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, version=9.4, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec  1 19:26:19 compute-0 systemd-logind[797]: New session 29 of user zuul.
Dec  1 19:26:19 compute-0 systemd[1]: Started Session 29 of User zuul.
Dec  1 19:26:20 compute-0 podman[237302]: 2025-12-01 19:26:20.681665967 +0000 UTC m=+0.092501832 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:26:20 compute-0 podman[237301]: 2025-12-01 19:26:20.721039688 +0000 UTC m=+0.123708219 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:26:20 compute-0 podman[237303]: 2025-12-01 19:26:20.757203252 +0000 UTC m=+0.162999449 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  1 19:26:20 compute-0 python3[237366]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 19:26:22 compute-0 python3[237607]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 19:26:24 compute-0 python3[237760]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
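[editor's note] The two ansible shell tasks above share one pattern: build a "30 minutes ago" timestamp, then ask journalctl for records from a given syslog identifier since that time (the `#012` sequences are journald's escaping of embedded newlines). A minimal Python sketch of the same pattern, assuming only that journalctl is on PATH and that the identifier is passed in:

```python
import subprocess
from datetime import datetime, timedelta

def recent_unit_logs(identifier: str, minutes: int = 30) -> str:
    # Same timestamp format the shell task builds with `date -d '30 minute ago'`,
    # which is what journalctl's -S/--since flag expects.
    tstamp = (datetime.now() - timedelta(minutes=minutes)).strftime("%Y-%m-%d %H:%M:%S")
    # -t filters by syslog identifier (e.g. "ceilometer_agent_compute" or
    # "nova_compute" as in the tasks above); --no-pager keeps output capturable.
    result = subprocess.run(
        ["journalctl", "-t", identifier, "--no-pager", "-S", tstamp],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# e.g. recent_unit_logs("nova_compute")
```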
Dec  1 19:26:27 compute-0 python3[237911]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  1 19:26:28 compute-0 podman[238039]: 2025-12-01 19:26:28.098961565 +0000 UTC m=+0.132918775 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public)
Dec  1 19:26:28 compute-0 python3[238085]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  1 19:26:29 compute-0 podman[203750]: time="2025-12-01T19:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:26:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:26:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4277 "" "Go-http-client/1.1"
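[editor's note] The two GET lines above are the libpod REST API being polled over the podman service socket; the podman_exporter config later in this log mounts /run/podman/podman.sock, so that path is assumed here. A standard-library-only sketch of issuing the same containers/json request over an AF_UNIX socket (the socket path and endpoint are taken from this log, not from any documented default):

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket: the libpod API here has no TCP port."""
    def __init__(self, sock_path: str):
        super().__init__("localhost")  # host header only; real transport is the socket
        self.sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.sock_path)
        self.sock = s

# Path assumed from the podman_exporter volume mount recorded in this log.
conn = UnixHTTPConnection("/run/podman/podman.sock")
# Same endpoint the exporter polls in the access-log lines above.
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes")
```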
Dec  1 19:26:30 compute-0 python3[238312]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 19:26:31 compute-0 openstack_network_exporter[205914]: ERROR   19:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:26:31 compute-0 openstack_network_exporter[205914]: ERROR   19:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:26:31 compute-0 openstack_network_exporter[205914]: ERROR   19:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:26:31 compute-0 openstack_network_exporter[205914]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:26:31 compute-0 openstack_network_exporter[205914]: ERROR   19:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:26:32 compute-0 python3[238478]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
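[editor's note] This task and the one at 19:26:30 both use podman's Go-template --format to print "name status" pairs and grep for one container. A sketch of the same health probe in Python, keeping the exact podman invocation from the log and only replacing the grep with string matching (container names here are illustrative, taken from this log):

```python
import subprocess

def container_status(name: str) -> str:
    # Mirrors the ansible task: list every container as "<name> <status>".
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith(name + " "):
            return line
    raise LookupError(f"no container named {name!r}")

# e.g. container_status("node_exporter") returns a line like
# "node_exporter Up ... (healthy)" when the health check is passing.
```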
Dec  1 19:26:33 compute-0 podman[238517]: 2025-12-01 19:26:33.363205167 +0000 UTC m=+0.122075770 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:26:40 compute-0 podman[238542]: 2025-12-01 19:26:40.3080712 +0000 UTC m=+0.077232439 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:26:44 compute-0 podman[238562]: 2025-12-01 19:26:44.812518558 +0000 UTC m=+0.124034682 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:26:45 compute-0 nova_compute[189564]: 2025-12-01 19:26:45.202 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:45 compute-0 nova_compute[189564]: 2025-12-01 19:26:45.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:45 compute-0 nova_compute[189564]: 2025-12-01 19:26:45.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:26:45 compute-0 nova_compute[189564]: 2025-12-01 19:26:45.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:26:45 compute-0 nova_compute[189564]: 2025-12-01 19:26:45.271 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 19:26:45 compute-0 nova_compute[189564]: 2025-12-01 19:26:45.271 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:45 compute-0 nova_compute[189564]: 2025-12-01 19:26:45.272 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:45 compute-0 nova_compute[189564]: 2025-12-01 19:26:45.272 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:26:46 compute-0 nova_compute[189564]: 2025-12-01 19:26:46.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:46 compute-0 nova_compute[189564]: 2025-12-01 19:26:46.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:46 compute-0 nova_compute[189564]: 2025-12-01 19:26:46.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:46 compute-0 podman[238585]: 2025-12-01 19:26:46.337442269 +0000 UTC m=+0.095705489 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:26:47 compute-0 nova_compute[189564]: 2025-12-01 19:26:47.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:47 compute-0 podman[238605]: 2025-12-01 19:26:47.869642394 +0000 UTC m=+0.129709657 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9)
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.275 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.276 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.276 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.276 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.653 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.656 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.4358024597168GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.656 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.656 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.758 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.758 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.794 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.811 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.813 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:26:48 compute-0 nova_compute[189564]: 2025-12-01 19:26:48.814 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
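[editor's note] The inventory dict reported to placement at 19:26:48.811 pairs raw totals with reservations and allocation ratios; placement computes schedulable capacity as (total - reserved) * allocation_ratio. A worked example using exactly the numbers logged above:

```python
# Inventory as logged by nova.scheduler.client.report for this provider.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    # Placement's usable capacity for a resource class.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 32  MEMORY_MB: 7168  DISK_GB: 71.1
# i.e. 8 physical vCPUs oversubscribed 4x, 512 MB of RAM held back,
# and disk deliberately under-committed at 0.9.
```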
Dec  1 19:26:49 compute-0 nova_compute[189564]: 2025-12-01 19:26:49.810 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:26:51 compute-0 podman[238625]: 2025-12-01 19:26:51.436917245 +0000 UTC m=+0.122290818 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:26:51 compute-0 podman[238626]: 2025-12-01 19:26:51.451980578 +0000 UTC m=+0.122667809 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:26:51 compute-0 podman[238627]: 2025-12-01 19:26:51.52248664 +0000 UTC m=+0.186168325 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 19:26:58 compute-0 podman[238688]: 2025-12-01 19:26:58.323642183 +0000 UTC m=+0.096165444 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec  1 19:26:59 compute-0 podman[203750]: time="2025-12-01T19:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:26:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:26:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4286 "" "Go-http-client/1.1"
Dec  1 19:27:01 compute-0 openstack_network_exporter[205914]: ERROR   19:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:27:01 compute-0 openstack_network_exporter[205914]: ERROR   19:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:27:01 compute-0 openstack_network_exporter[205914]: ERROR   19:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:27:01 compute-0 openstack_network_exporter[205914]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:27:01 compute-0 openstack_network_exporter[205914]: ERROR   19:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:27:04 compute-0 podman[238709]: 2025-12-01 19:27:04.285850564 +0000 UTC m=+0.058050029 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:27:11 compute-0 podman[238735]: 2025-12-01 19:27:11.388474703 +0000 UTC m=+0.146301467 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:27:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:27:12.170 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:27:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:27:12.170 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:27:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:27:12.170 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:27:15 compute-0 podman[238756]: 2025-12-01 19:27:15.313728541 +0000 UTC m=+0.086392552 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 19:27:17 compute-0 podman[238779]: 2025-12-01 19:27:17.341346177 +0000 UTC m=+0.115529940 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 19:27:18 compute-0 podman[238799]: 2025-12-01 19:27:18.322705695 +0000 UTC m=+0.093678906 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0)
Dec  1 19:27:22 compute-0 podman[238819]: 2025-12-01 19:27:22.347963874 +0000 UTC m=+0.108033899 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 19:27:22 compute-0 podman[238820]: 2025-12-01 19:27:22.351601735 +0000 UTC m=+0.118289465 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  1 19:27:22 compute-0 podman[238821]: 2025-12-01 19:27:22.413735919 +0000 UTC m=+0.165428396 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 19:27:29 compute-0 podman[238879]: 2025-12-01 19:27:29.357316659 +0000 UTC m=+0.122569897 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 19:27:29 compute-0 podman[203750]: time="2025-12-01T19:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:27:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:27:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4286 "" "Go-http-client/1.1"
Dec  1 19:27:31 compute-0 openstack_network_exporter[205914]: ERROR   19:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:27:31 compute-0 openstack_network_exporter[205914]: ERROR   19:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:27:31 compute-0 openstack_network_exporter[205914]: ERROR   19:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:27:31 compute-0 openstack_network_exporter[205914]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:27:31 compute-0 openstack_network_exporter[205914]: ERROR   19:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:27:32 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec  1 19:27:32 compute-0 systemd[1]: session-29.scope: Consumed 10.261s CPU time.
Dec  1 19:27:32 compute-0 systemd-logind[797]: Session 29 logged out. Waiting for processes to exit.
Dec  1 19:27:32 compute-0 systemd-logind[797]: Removed session 29.
Dec  1 19:27:35 compute-0 podman[238900]: 2025-12-01 19:27:35.33902618 +0000 UTC m=+0.100731504 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:27:42 compute-0 podman[238925]: 2025-12-01 19:27:42.379219063 +0000 UTC m=+0.135095024 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:27:45 compute-0 nova_compute[189564]: 2025-12-01 19:27:45.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:27:45 compute-0 nova_compute[189564]: 2025-12-01 19:27:45.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:27:45 compute-0 nova_compute[189564]: 2025-12-01 19:27:45.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:27:45 compute-0 nova_compute[189564]: 2025-12-01 19:27:45.267 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 19:27:46 compute-0 nova_compute[189564]: 2025-12-01 19:27:46.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:27:46 compute-0 nova_compute[189564]: 2025-12-01 19:27:46.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:27:46 compute-0 nova_compute[189564]: 2025-12-01 19:27:46.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:27:46 compute-0 nova_compute[189564]: 2025-12-01 19:27:46.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:27:46 compute-0 nova_compute[189564]: 2025-12-01 19:27:46.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:27:46 compute-0 podman[238943]: 2025-12-01 19:27:46.363740523 +0000 UTC m=+0.126788699 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:27:47 compute-0 nova_compute[189564]: 2025-12-01 19:27:47.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.269 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.270 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.270 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.270 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:27:48 compute-0 podman[238967]: 2025-12-01 19:27:48.306672972 +0000 UTC m=+0.083881535 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.593 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.595 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5708MB free_disk=72.43577575683594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.595 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.596 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.706 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.707 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.735 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.751 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.754 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:27:48 compute-0 nova_compute[189564]: 2025-12-01 19:27:48.755 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.809 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.809 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.809 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.810 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.818 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.818 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.818 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.819 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.819 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.819 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.819 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.820 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.820 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.820 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.820 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.821 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.821 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.821 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.821 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.822 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.822 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.822 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.822 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.823 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.823 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.823 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.823 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.824 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.824 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.824 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.824 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.825 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.825 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.825 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.825 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.826 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.826 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:27:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:27:49 compute-0 podman[238986]: 2025-12-01 19:27:49.337539978 +0000 UTC m=+0.109097773 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., config_id=edpm, release=1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.openshift.expose-services=)
Dec  1 19:27:49 compute-0 nova_compute[189564]: 2025-12-01 19:27:49.756 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:27:53 compute-0 podman[239008]: 2025-12-01 19:27:53.352254948 +0000 UTC m=+0.117625815 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  1 19:27:53 compute-0 podman[239007]: 2025-12-01 19:27:53.357407337 +0000 UTC m=+0.119160032 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:27:53 compute-0 podman[239009]: 2025-12-01 19:27:53.410452431 +0000 UTC m=+0.165717597 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:27:59 compute-0 podman[203750]: time="2025-12-01T19:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:27:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:27:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4278 "" "Go-http-client/1.1"
Dec  1 19:28:00 compute-0 podman[239063]: 2025-12-01 19:28:00.345438648 +0000 UTC m=+0.116018447 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 19:28:01 compute-0 openstack_network_exporter[205914]: ERROR   19:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:28:01 compute-0 openstack_network_exporter[205914]: ERROR   19:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:28:01 compute-0 openstack_network_exporter[205914]: ERROR   19:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:28:01 compute-0 openstack_network_exporter[205914]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:28:01 compute-0 openstack_network_exporter[205914]: ERROR   19:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:28:06 compute-0 podman[239084]: 2025-12-01 19:28:06.375557361 +0000 UTC m=+0.129434650 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:28:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:28:12.171 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:28:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:28:12.171 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:28:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:28:12.172 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:28:13 compute-0 podman[239110]: 2025-12-01 19:28:13.369884745 +0000 UTC m=+0.132591717 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 19:28:17 compute-0 podman[239129]: 2025-12-01 19:28:17.27505776 +0000 UTC m=+0.054414758 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:28:19 compute-0 podman[239153]: 2025-12-01 19:28:19.33027437 +0000 UTC m=+0.098422514 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Dec  1 19:28:20 compute-0 podman[239173]: 2025-12-01 19:28:20.299056552 +0000 UTC m=+0.072728303 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, config_id=edpm, container_name=kepler, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.29.0)
Dec  1 19:28:24 compute-0 podman[239194]: 2025-12-01 19:28:24.337282086 +0000 UTC m=+0.104893723 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:28:24 compute-0 podman[239193]: 2025-12-01 19:28:24.368688354 +0000 UTC m=+0.127970354 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 19:28:24 compute-0 podman[239195]: 2025-12-01 19:28:24.376009329 +0000 UTC m=+0.139658444 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 19:28:29 compute-0 podman[203750]: time="2025-12-01T19:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:28:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:28:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4278 "" "Go-http-client/1.1"
Dec  1 19:28:31 compute-0 podman[239256]: 2025-12-01 19:28:31.338619557 +0000 UTC m=+0.099575309 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec  1 19:28:31 compute-0 openstack_network_exporter[205914]: ERROR   19:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:28:31 compute-0 openstack_network_exporter[205914]: ERROR   19:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:28:31 compute-0 openstack_network_exporter[205914]: ERROR   19:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:28:31 compute-0 openstack_network_exporter[205914]: ERROR   19:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:28:31 compute-0 openstack_network_exporter[205914]: ERROR   19:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:28:37 compute-0 podman[239276]: 2025-12-01 19:28:37.291479298 +0000 UTC m=+0.068357368 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 19:28:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:28:41.985 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 19:28:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:28:41.986 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 19:28:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:28:41.988 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 19:28:43 compute-0 podman[239304]: 2025-12-01 19:28:43.530788376 +0000 UTC m=+0.094766181 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  1 19:28:45 compute-0 nova_compute[189564]: 2025-12-01 19:28:45.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:28:45 compute-0 nova_compute[189564]: 2025-12-01 19:28:45.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:28:45 compute-0 nova_compute[189564]: 2025-12-01 19:28:45.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 19:28:45 compute-0 nova_compute[189564]: 2025-12-01 19:28:45.263 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 19:28:46 compute-0 nova_compute[189564]: 2025-12-01 19:28:46.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:28:47 compute-0 nova_compute[189564]: 2025-12-01 19:28:47.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:28:47 compute-0 nova_compute[189564]: 2025-12-01 19:28:47.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:28:47 compute-0 nova_compute[189564]: 2025-12-01 19:28:47.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:28:47 compute-0 nova_compute[189564]: 2025-12-01 19:28:47.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.287 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.288 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.288 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.289 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:28:48 compute-0 podman[239323]: 2025-12-01 19:28:48.320950691 +0000 UTC m=+0.089659734 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.613 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.615 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5690MB free_disk=72.43574905395508GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.615 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.616 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.707 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.708 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.743 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.759 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
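For each resource class in the inventory payload above, placement derives schedulable capacity as (total - reserved) * allocation_ratio. A quick check against the logged numbers:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(capacity, 2))
    # VCPU 32.0        -- 8 host vCPUs oversubscribed 4:1
    # MEMORY_MB 7168.0 -- 512 MB held back for the host
    # DISK_GB 71.1     -- disk deliberately under-subscribed at 0.9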
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.762 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:28:48 compute-0 nova_compute[189564]: 2025-12-01 19:28:48.763 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
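The Acquiring / acquired / released triple around "compute_resources" is oslo.concurrency's standard logging for a named lock, and the waited/held durations come from the same wrapper. A minimal sketch of the pattern, assuming oslo.concurrency is installed (the function bodies are hypothetical):

    from oslo_concurrency import lockutils

    # Decorator form: all callers sharing the lock name are serialized,
    # as ResourceTracker._update_available_resource is above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        pass  # hypothetical critical section

    # Context-manager form, for finer-grained sections.
    with lockutils.lock("compute_resources"):
        pass  # hypothetical work done under the lock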
Dec  1 19:28:50 compute-0 podman[239347]: 2025-12-01 19:28:50.370924729 +0000 UTC m=+0.132107922 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 19:28:50 compute-0 podman[239367]: 2025-12-01 19:28:50.510753578 +0000 UTC m=+0.108435322 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, version=9.4)
Dec  1 19:28:50 compute-0 nova_compute[189564]: 2025-12-01 19:28:50.765 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:28:51 compute-0 nova_compute[189564]: 2025-12-01 19:28:51.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:28:51 compute-0 nova_compute[189564]: 2025-12-01 19:28:51.612 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:28:55 compute-0 podman[239388]: 2025-12-01 19:28:55.367013988 +0000 UTC m=+0.134635110 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 19:28:55 compute-0 podman[239387]: 2025-12-01 19:28:55.380180885 +0000 UTC m=+0.136113756 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  1 19:28:55 compute-0 podman[239386]: 2025-12-01 19:28:55.390518813 +0000 UTC m=+0.151019465 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 19:28:59 compute-0 podman[203750]: time="2025-12-01T19:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:28:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:28:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4282 "" "Go-http-client/1.1"
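These two GETs are the podman_exporter polling podman's libpod REST API over the socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal standard-library sketch of the same container listing; HTTP/1.0 is requested so the reply is not chunked, and the path is copied from the log:

    import json
    import socket

    def libpod_get(path, sock="/run/podman/podman.sock"):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock)
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: d\r\n\r\n".encode())
        data = b""
        while chunk := s.recv(65536):
            data += chunk
        s.close()
        # The body follows the blank line that ends the response headers.
        return json.loads(data.split(b"\r\n\r\n", 1)[1])

    containers = libpod_get("/v4.9.3/libpod/containers/json?all=true")
    print(len(containers), "containers")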
Dec  1 19:29:01 compute-0 openstack_network_exporter[205914]: ERROR   19:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:29:01 compute-0 openstack_network_exporter[205914]: ERROR   19:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:29:01 compute-0 openstack_network_exporter[205914]: ERROR   19:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:29:01 compute-0 openstack_network_exporter[205914]: ERROR   19:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:29:01 compute-0 openstack_network_exporter[205914]: ERROR   19:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
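These errors recur every 30 seconds: the exporter probes the ovs/ovn daemons through their unixctl control sockets, and on a compute node ovsdb-server's socket is not where it looks, ovn-northd does not run at all, and the dpif-netdev (userspace/DPDK) commands have no datapath to report on. A sketch of the same probe against the default socket location, assuming host paths:

    import glob
    import subprocess

    # ovs-vswitchd names its control socket after its PID.
    sockets = glob.glob("/var/run/openvswitch/ovs-vswitchd.*.ctl")
    if not sockets:
        print("no control socket files found")  # the exporter's failure mode
    else:
        # With a kernel datapath this fails with "please specify an
        # existing datapath", matching the pmd-perf-show errors above.
        subprocess.run(["ovs-appctl", "-t", sockets[0],
                        "dpif-netdev/pmd-perf-show"])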
Dec  1 19:29:02 compute-0 podman[239447]: 2025-12-01 19:29:02.335203876 +0000 UTC m=+0.099921179 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:29:08 compute-0 podman[239468]: 2025-12-01 19:29:08.337678437 +0000 UTC m=+0.109120573 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
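node_exporter, like the other exporters here, terminates TLS itself via the web.config.file it is started with. A sketch of scraping it from the host, assuming the telemetry CA bundle path seen in the volume mounts is also readable on the host and that the certificate covers the hostname used (both deployment-specific):

    import ssl
    import urllib.request

    # CA path taken from the telemetry container mounts in this log;
    # its availability on the host is an assumption.
    ctx = ssl.create_default_context(
        cafile="/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem")
    with urllib.request.urlopen("https://compute-0:9100/metrics",
                                context=ctx, timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)  # first few Prometheus text-format samples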
Dec  1 19:29:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:12.172 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:12.172 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:12.173 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:14 compute-0 podman[239491]: 2025-12-01 19:29:14.329900883 +0000 UTC m=+0.094616147 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 19:29:19 compute-0 podman[239511]: 2025-12-01 19:29:19.315937854 +0000 UTC m=+0.085927269 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:29:21 compute-0 podman[239536]: 2025-12-01 19:29:21.320548934 +0000 UTC m=+0.085658962 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, release-0.7.12=)
Dec  1 19:29:21 compute-0 podman[239537]: 2025-12-01 19:29:21.343648046 +0000 UTC m=+0.104485111 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 19:29:26 compute-0 podman[239574]: 2025-12-01 19:29:26.332383748 +0000 UTC m=+0.096070671 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Dec  1 19:29:26 compute-0 podman[239575]: 2025-12-01 19:29:26.351426425 +0000 UTC m=+0.101035095 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:29:26 compute-0 podman[239576]: 2025-12-01 19:29:26.41324805 +0000 UTC m=+0.157499395 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 19:29:29 compute-0 podman[203750]: time="2025-12-01T19:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:29:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:29:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4288 "" "Go-http-client/1.1"
Dec  1 19:29:31 compute-0 openstack_network_exporter[205914]: ERROR   19:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:29:31 compute-0 openstack_network_exporter[205914]: ERROR   19:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:29:31 compute-0 openstack_network_exporter[205914]: ERROR   19:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:29:31 compute-0 openstack_network_exporter[205914]: ERROR   19:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:29:31 compute-0 openstack_network_exporter[205914]: ERROR   19:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:29:33 compute-0 podman[239636]: 2025-12-01 19:29:33.307342964 +0000 UTC m=+0.078300694 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, version=9.6)
Dec  1 19:29:34 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:34.097 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:29:34 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:34.099 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 19:29:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:36.100 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
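The transaction writes the acknowledged nb_cfg back into Chassis_Private.external_ids via ovsdbapp's DbSetCommand. The CLI equivalent, sketched with the record UUID from the log; ovn-sbctl must be pointed at the southbound DB (remote on an EDPM node), and the key needs quoting because it contains a colon of its own:

    import subprocess

    record = "91869463-7ce7-4561-8225-db4a77bb5f12"  # Chassis_Private row above
    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private", record,
         'external_ids:"neutron:ovn-metadata-sb-cfg"="3"'],
        check=True,  # assumes OVN_SB_DB (or --db) identifies the SB server
    )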
Dec  1 19:29:39 compute-0 podman[239655]: 2025-12-01 19:29:39.334055172 +0000 UTC m=+0.096886006 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 19:29:44 compute-0 podman[239681]: 2025-12-01 19:29:44.827244 +0000 UTC m=+0.151099557 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.054 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.055 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.087 189568 DEBUG nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.244 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.245 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.262 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.263 189568 INFO nova.compute.claims [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.274 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.502 189568 DEBUG nova.compute.provider_tree [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.532 189568 DEBUG nova.scheduler.client.report [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.572 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.572 189568 DEBUG nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.639 189568 DEBUG nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.640 189568 DEBUG nova.network.neutron [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.666 189568 INFO nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.710 189568 DEBUG nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.801 189568 DEBUG nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.804 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.805 189568 INFO nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Creating image(s)#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.806 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.807 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.809 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.810 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:46 compute-0 nova_compute[189564]: 2025-12-01 19:29:46.812 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
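The 40-hex lock name (and the _base file in the qemu-img probe below) is the SHA-1 of the Glance image ID, which is how the libvirt image backend names cached base images. A sketch with a hypothetical image UUID:

    import hashlib

    image_id = "00000000-0000-0000-0000-000000000000"  # hypothetical Glance UUID
    cache_name = hashlib.sha1(image_id.encode()).hexdigest()
    # Downloads land in _base/<sha1>.part and are renamed to _base/<sha1>
    # once the fetch completes.
    print(f"/var/lib/nova/instances/_base/{cache_name}.part")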
Dec  1 19:29:47 compute-0 nova_compute[189564]: 2025-12-01 19:29:47.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:29:47 compute-0 nova_compute[189564]: 2025-12-01 19:29:47.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:29:47 compute-0 nova_compute[189564]: 2025-12-01 19:29:47.300 189568 WARNING oslo_policy.policy [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec  1 19:29:47 compute-0 nova_compute[189564]: 2025-12-01 19:29:47.301 189568 WARNING oslo_policy.policy [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
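The deprecation warning above appears to fire on each policy evaluation; the remedy it points at is a one-time conversion with the oslo.policy CLI, along the lines of the following (flag names per the linked documentation; the paths are illustrative):

    oslopolicy-convert-json-to-yaml --namespace nova \
        --policy-file /etc/nova/policy.json \
        --output-file /etc/nova/policy.yaml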
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.145 189568 DEBUG nova.network.neutron [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Successfully created port: 3cef930c-870a-4936-a206-b4c3a7ce5c1a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.204 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.251 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.304 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683.part --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.305 189568 DEBUG nova.virt.images [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] 15bc897a-453b-4133-b6db-08ecdc2b6db0 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.307 189568 DEBUG nova.privsep.utils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.307 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683.part /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.483 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683.part /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683.converted" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.488 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.539 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.540 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
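The entries from 19:29:48.204 through 19:29:48.540 are Nova's fetch-to-raw path: probe the downloaded .part file, convert qcow2 to raw with the host page cache bypassed, re-probe the result, then release the cache lock. The qemu-img probes run under a prlimit wrapper (1 GiB address space, 30 s CPU) to bound work on a potentially hostile image. A sketch of the same two calls through oslo.concurrency; the base path is copied from the log:

    from oslo_concurrency import processutils

    BASE = '/var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683'

    # Bound the child process the way the logged prlimit wrapper does.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    # Probe the image without taking an exclusive lock on it.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        BASE + '.part', '--force-share', '--output=json',
        prlimit=limits)

    # qcow2 -> raw; -t none bypasses the page cache and requires direct I/O,
    # which the supports_direct_io check above verified.
    processutils.execute(
        'qemu-img', 'convert', '-t', 'none', '-O', 'raw', '-f', 'qcow2',
        BASE + '.part', BASE + '.converted')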
Dec  1 19:29:48 compute-0 nova_compute[189564]: 2025-12-01 19:29:48.557 189568 INFO oslo.privsep.daemon [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpjm__fi8y/privsep.sock']
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.810 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.810 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.810 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.811 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.813 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.814 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.815 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.816 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.817 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.818 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.819 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.821 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.821 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.821 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.821 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.821 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.822 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.822 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.822 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.822 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.822 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.822 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.822 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.822 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.822 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.823 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.device.allocation': [], 'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.824 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.825 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.825 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.825 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.825 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.825 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.826 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.826 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.826 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.827 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.828 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.829 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.829 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.829 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.829 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.829 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.830 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.830 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.830 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.830 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.831 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.832 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:29:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:29:48.832 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
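The burst of DEBUG lines above is ceilometer's polling manager walking its pollster list once per polling interval and logging one completion line per meter. A minimal sketch of that poll-and-log loop, with pollster names taken from the log and the task plumbing simplified to stand-ins (the real code lives in ceilometer/polling/manager.py):

    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("ceilometer.polling.manager")

    # Subset of the pollsters seen above; each real pollster queries libvirt
    # or the host and emits samples before the completion line is logged.
    POLLSTERS = [
        "disk.device.usage", "disk.device.write.bytes", "power.state",
        "network.incoming.packets", "cpu", "memory.usage",
    ]

    def poll_one(name):
        return []        # stand-in for the hypervisor/host queries

    def publish(samples):
        pass             # stand-in for handing samples to the pipeline

    def execute_polling_task_processing(pollsters):
        for name in pollsters:
            publish(poll_one(name))
            LOG.debug("Finished processing pollster [%s].", name)

    execute_polling_task_processing(POLLSTERS)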
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.212 189568 INFO oslo.privsep.daemon [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.078 239719 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.086 239719 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.090 239719 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.090 239719 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239719#033[00m
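The five INFO lines above are the one-time cost of this worker's first privileged call: oslo.privsep forks a root helper via rootwrap, and the helper reports its uid/gid and the capabilities it retained. Roughly how such a context is declared — the capability list is copied from the log line; nova's actual definition lives in nova/privsep/__init__.py and this sketch assumes oslo.privsep is installed:

    from oslo_privsep import capabilities, priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[
            capabilities.CAP_CHOWN,
            capabilities.CAP_DAC_OVERRIDE,
            capabilities.CAP_DAC_READ_SEARCH,
            capabilities.CAP_FOWNER,
            capabilities.CAP_NET_ADMIN,
            capabilities.CAP_SYS_ADMIN,
        ],
    )

    @sys_admin_pctxt.entrypoint
    def read_protected_file(path):
        # Decorated functions execute inside the root daemon spawned above;
        # the unprivileged service only ships the arguments over a socket.
        with open(path, 'rb') as f:
            return f.read()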
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.299 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.356 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
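Each qemu-img probe above runs under oslo.concurrency's prlimit wrapper so a pathological image cannot exhaust the host: address space is capped at 1 GiB and CPU time at 30 s, and --force-share lets the probe read an image a running guest may hold open. A stdlib reconstruction of the exact CMD line (path copied from the log):

    import json
    import subprocess

    BASE = ("/var/lib/nova/instances/_base/"
            "1324593a3f01becd5f72fdfdb0281e45c2a6b683")

    def qemu_img_info(path):
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824",   # cap address space at 1 GiB
            "--cpu=30",          # cap CPU time at 30 s
            "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        return json.loads(subprocess.check_output(cmd))

    # qemu_img_info(BASE)["virtual-size"] -> size in bytes, consumed below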
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.359 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.360 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.389 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.484 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.486 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683,backing_fmt=raw /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.550 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683,backing_fmt=raw /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk 1073741824" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.551 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
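The Acquiring/acquired/released triple around create_qcow2_image is oslo.concurrency serializing writers on the cached base image while the instance disk is created as a copy-on-write qcow2 overlay on top of it. A sketch of the pattern, using an in-process lock where nova uses an external file lock (paths and size copied from the CMD line):

    import subprocess
    from oslo_concurrency import lockutils

    BASE = ("/var/lib/nova/instances/_base/"
            "1324593a3f01becd5f72fdfdb0281e45c2a6b683")
    DISK = ("/var/lib/nova/instances/"
            "e73931e9-f7fa-4666-b781-700b385532a9/disk")

    def create_qcow2_image(base, target, size):
        # Serialize on the base-image name so concurrent spawns sharing the
        # same cached base cannot race each other.
        with lockutils.lock("1324593a3f01becd5f72fdfdb0281e45c2a6b683"):
            subprocess.check_call([
                "qemu-img", "create", "-f", "qcow2",
                "-o", f"backing_file={base},backing_fmt=raw",
                target, str(size),
            ])

    # create_qcow2_image(BASE, DISK, 1073741824)   # 1 GiB virtual size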
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.552 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.650 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.652 189568 DEBUG nova.virt.disk.api [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Checking if we can resize image /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.654 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.734 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.735 189568 DEBUG nova.virt.disk.api [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Cannot resize image /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
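"Cannot resize image ... to a smaller size" is not an error here: the overlay just created already has the flavor's 1 GB virtual size, and nova only ever grows file-backed disks. The check in nova/virt/disk/api.py amounts to the comparison below (illustrative, trimmed to what the two log lines show):

    import json
    import subprocess

    def get_disk_size(path):
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"])
        return json.loads(out)["virtual-size"]

    def can_resize_image(path, size):
        # Growing is safe; shrinking a filesystem-backed image would corrupt
        # it, so nova skips the resize and logs the debug line seen above.
        return get_disk_size(path) < size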
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.736 189568 DEBUG nova.objects.instance [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'migration_context' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.749 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.750 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.751 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.752 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.752 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.754 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.778 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.779 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.821 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.823 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.849 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.930 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.933 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.934 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:49 compute-0 nova_compute[189564]: 2025-12-01 19:29:49.964 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.049 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.051 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.097 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.100 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
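The ephemeral disk follows the same cache-then-overlay scheme as the root disk, except the cached base is generated locally instead of fetched from Glance: a 1 GB raw file, formatted VFAT with the label the guest sees as ephemeral0, then a per-instance qcow2 overlay. The three CMD lines above, replayed with subprocess (paths copied from the log):

    import subprocess

    base = "/var/lib/nova/instances/_base/ephemeral_1_0706d66"
    overlay = ("/var/lib/nova/instances/"
               "e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0")

    # 1. allocate the shared base image
    subprocess.check_call(["qemu-img", "create", "-f", "raw", base, "1G"])
    # 2. put a filesystem on it, labelled for the guest
    subprocess.check_call(["mkfs", "-t", "vfat", "-n", "ephemeral0", base])
    # 3. give this instance a copy-on-write overlay over the cached base
    subprocess.check_call([
        "qemu-img", "create", "-f", "qcow2",
        "-o", f"backing_file={base},backing_fmt=raw",
        overlay, "1073741824",
    ])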
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.101 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.160 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.162 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.163 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Ensure instance console log exists: /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.164 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.165 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.166 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.280 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.282 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.282 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:29:50 compute-0 podman[239753]: 2025-12-01 19:29:50.320557537 +0000 UTC m=+0.082558012 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.452 189568 DEBUG nova.network.neutron [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Successfully updated port: 3cef930c-870a-4936-a206-b4c3a7ce5c1a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.475 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.475 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.476 189568 DEBUG nova.network.neutron [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.605 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.608 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5628MB free_disk=72.40515518188477GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.608 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.608 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.814 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.814 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.815 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
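The final resource view is internally consistent with the allocation reported a few lines earlier; checking the arithmetic (all figures from the log):

    # used_ram combines the instance's flavor memory with the host reservation
    instance_memory_mb = 512         # MEMORY_MB in the placement allocation
    reserved_host_memory_mb = 512    # 'reserved' in the MEMORY_MB inventory
    assert instance_memory_mb + reserved_host_memory_mb == 1024  # used_ram

    # used_disk is the 1 GB root disk plus the 1 GB ephemeral disk
    root_gb, ephemeral_gb = 1, 1     # DISK_GB: 2 in the allocation
    assert root_gb + ephemeral_gb == 2                           # used_disk

    # vCPUs: 1 of 8 host vCPUs allocated; the 4.0 allocation_ratio is
    # applied by placement, not in this host-side count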
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.874 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.905 189568 ERROR nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [req-a44c6288-c4b6-42dc-ae7f-e470c53490ec] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 0211b5d4-bab8-409f-8f53-df766ffbcb27.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-a44c6288-c4b6-42dc-ae7f-e470c53490ec"}]}#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.931 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.953 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.954 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.970 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.990 189568 DEBUG nova.compute.manager [req-6588a8c5-fe2a-41e2-9f60-4905bc8565dc req-c25ff29e-a203-4a7d-bf3b-48a7d57aa4ea 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received event network-changed-3cef930c-870a-4936-a206-b4c3a7ce5c1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.991 189568 DEBUG nova.compute.manager [req-6588a8c5-fe2a-41e2-9f60-4905bc8565dc req-c25ff29e-a203-4a7d-bf3b-48a7d57aa4ea 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Refreshing instance network info cache due to event network-changed-3cef930c-870a-4936-a206-b4c3a7ce5c1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 19:29:50 compute-0 nova_compute[189564]: 2025-12-01 19:29:50.991 189568 DEBUG oslo_concurrency.lockutils [req-6588a8c5-fe2a-41e2-9f60-4905bc8565dc req-c25ff29e-a203-4a7d-bf3b-48a7d57aa4ea 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:29:51 compute-0 nova_compute[189564]: 2025-12-01 19:29:51.003 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 19:29:51 compute-0 nova_compute[189564]: 2025-12-01 19:29:51.062 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 19:29:51 compute-0 nova_compute[189564]: 2025-12-01 19:29:51.125 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updated inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  1 19:29:51 compute-0 nova_compute[189564]: 2025-12-01 19:29:51.126 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  1 19:29:51 compute-0 nova_compute[189564]: 2025-12-01 19:29:51.126 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
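The ERROR above is the normal optimistic-concurrency path rather than a failure: nova's PUT carried a stale resource provider generation (another writer had just bumped it), placement answered 409 placement.concurrent_update, and nova refreshed its cache and retried, moving the generation from 3 to 4. A sketch of that retry against the placement REST API — the endpoint, session plumbing, auth and microversion headers here are illustrative, not nova's report client:

    import requests

    PLACEMENT = "http://placement.example.com"   # illustrative endpoint
    RP = "0211b5d4-bab8-409f-8f53-df766ffbcb27"

    def set_inventory(session, inventories):
        url = f"{PLACEMENT}/resource_providers/{RP}/inventories"
        body = {
            "resource_provider_generation":
                session.get(url).json()["resource_provider_generation"],
            "inventories": inventories,
        }
        resp = session.put(url, json=body)
        if resp.status_code == 409 and \
                "placement.concurrent_update" in resp.text:
            # Someone else bumped the generation between GET and PUT:
            # re-read and retry once, exactly what the log shows nova doing.
            body["resource_provider_generation"] = \
                session.get(url).json()["resource_provider_generation"]
            resp = session.put(url, json=body)
        resp.raise_for_status()
        return resp.json()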
Dec  1 19:29:51 compute-0 nova_compute[189564]: 2025-12-01 19:29:51.154 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:29:51 compute-0 nova_compute[189564]: 2025-12-01 19:29:51.154 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:51 compute-0 nova_compute[189564]: 2025-12-01 19:29:51.170 189568 DEBUG nova.network.neutron [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 19:29:52 compute-0 podman[239777]: 2025-12-01 19:29:52.3385427 +0000 UTC m=+0.101031592 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:29:52 compute-0 podman[239776]: 2025-12-01 19:29:52.368898404 +0000 UTC m=+0.126715691 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, name=ubi9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9.)
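The podman health_status events above (podman_exporter, ceilometer_agent_ipmi, kepler) are the timer-driven execution of each container's configured healthcheck test; exit code 0 from the test marks the container healthy. The same check can be run by hand (container names taken from the log lines):

    import subprocess

    for name in ("podman_exporter", "ceilometer_agent_ipmi", "kepler"):
        # equivalent to the periodic run behind each health_status event
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else "unhealthy")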
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.723 189568 DEBUG nova.network.neutron [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.746 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.746 189568 DEBUG nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Instance network_info: |[{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
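The network_info blob logged twice above (once when the cache is written, once when the manager reads it back) is nova's view of the instance's single OVS/OVN port. One detail worth verifying on a trimmed copy of the blob: the tap device name is derived from the port UUID, truncated so the interface name fits the kernel's length limit:

    vif = {   # trimmed from the "Instance network_info:" line above
        "id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a",
        "address": "fa:16:3e:fc:8b:70",
        "type": "ovs",
        "devname": "tap3cef930c-87",
        "details": {"bridge_name": "br-int", "bound_drivers": {"0": "ovn"}},
    }

    # devname = "tap" + the first 11 characters of the port UUID
    assert vif["devname"] == "tap" + vif["id"][:11]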
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.748 189568 DEBUG oslo_concurrency.lockutils [req-6588a8c5-fe2a-41e2-9f60-4905bc8565dc req-c25ff29e-a203-4a7d-bf3b-48a7d57aa4ea 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.748 189568 DEBUG nova.network.neutron [req-6588a8c5-fe2a-41e2-9f60-4905bc8565dc req-c25ff29e-a203-4a7d-bf3b-48a7d57aa4ea 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Refreshing network info cache for port 3cef930c-870a-4936-a206-b4c3a7ce5c1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.755 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Start _get_guest_xml network_info=[{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T19:28:30Z,direct_url=<?>,disk_format='qcow2',id=15bc897a-453b-4133-b6db-08ecdc2b6db0,min_disk=0,min_ram=0,name='cirros',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T19:28:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}], 'ephemerals': [{'guest_format': None, 'encryption_options': None, 'size': 1, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.765 189568 WARNING nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.778 189568 DEBUG nova.virt.libvirt.host [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.779 189568 DEBUG nova.virt.libvirt.host [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.783 189568 DEBUG nova.virt.libvirt.host [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.783 189568 DEBUG nova.virt.libvirt.host [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
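The two probes above decide whether CPU shares/quota can be applied to the guest: this host exposes no cgroup v1 cpu controller but does have one on the cgroup v2 unified hierarchy. A stdlib approximation of both checks using the standard kernel paths (nova's actual implementation differs in detail):

    import os

    def has_cgroupsv1_cpu_controller():
        # v1 mounts one directory per controller under /sys/fs/cgroup
        return os.path.isdir("/sys/fs/cgroup/cpu")

    def has_cgroupsv2_cpu_controller():
        # v2 lists available controllers in a single file at the root
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False

    print(has_cgroupsv1_cpu_controller(), has_cgroupsv2_cpu_controller())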
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.784 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.784 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T19:28:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='0891a7f6-7194-4f33-bc11-6f6ab8b16145',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T19:28:30Z,direct_url=<?>,disk_format='qcow2',id=15bc897a-453b-4133-b6db-08ecdc2b6db0,min_disk=0,min_ram=0,name='cirros',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T19:28:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.784 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.784 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.784 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.785 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.785 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.785 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.785 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.785 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.785 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.786 189568 DEBUG nova.virt.hardware [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.789 189568 DEBUG nova.privsep.utils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.790 189568 DEBUG nova.virt.libvirt.vif [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T19:29:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35d2a9caf1634dca9fc12ec078239d84',ramdisk_id='',reservation_id='r-rcohc3gr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T19:29:46Z,user_data=None,user_id='7c24e8f82e7842b785e565ac65c7f494',uuid=e73931e9-f7fa-4666-b781-700b385532a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.790 189568 DEBUG nova.network.os_vif_util [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converting VIF {"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.791 189568 DEBUG nova.network.os_vif_util [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:70,bridge_name='br-int',has_traffic_filtering=True,id=3cef930c-870a-4936-a206-b4c3a7ce5c1a,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cef930c-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.792 189568 DEBUG nova.objects.instance [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'pci_devices' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.806 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] End _get_guest_xml xml=<domain type="kvm">
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <uuid>e73931e9-f7fa-4666-b781-700b385532a9</uuid>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <name>instance-00000001</name>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <memory>524288</memory>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <metadata>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <nova:name>test_0</nova:name>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 19:29:52</nova:creationTime>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <nova:flavor name="m1.small">
Dec  1 19:29:52 compute-0 nova_compute[189564]:        <nova:memory>512</nova:memory>
Dec  1 19:29:52 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 19:29:52 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 19:29:52 compute-0 nova_compute[189564]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 19:29:52 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 19:29:52 compute-0 nova_compute[189564]:        <nova:user uuid="7c24e8f82e7842b785e565ac65c7f494">admin</nova:user>
Dec  1 19:29:52 compute-0 nova_compute[189564]:        <nova:project uuid="35d2a9caf1634dca9fc12ec078239d84">admin</nova:project>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="15bc897a-453b-4133-b6db-08ecdc2b6db0"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 19:29:52 compute-0 nova_compute[189564]:        <nova:port uuid="3cef930c-870a-4936-a206-b4c3a7ce5c1a">
Dec  1 19:29:52 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="192.168.0.47" ipVersion="4"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  </metadata>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <system>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <entry name="serial">e73931e9-f7fa-4666-b781-700b385532a9</entry>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <entry name="uuid">e73931e9-f7fa-4666-b781-700b385532a9</entry>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </system>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <os>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  </os>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <features>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <apic/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  </features>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  </clock>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  </cpu>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  <devices>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <target dev="vdb" bus="virtio"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.config"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:fc:8b:70"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <target dev="tap3cef930c-87"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </interface>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/console.log" append="off"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </serial>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <video>
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </video>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </rng>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 19:29:52 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 19:29:52 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 19:29:52 compute-0 nova_compute[189564]:  </devices>
Dec  1 19:29:52 compute-0 nova_compute[189564]: </domain>
Dec  1 19:29:52 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.806 189568 DEBUG nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Preparing to wait for external event network-vif-plugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.807 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.807 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.807 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.807 189568 DEBUG nova.virt.libvirt.vif [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T19:29:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35d2a9caf1634dca9fc12ec078239d84',ramdisk_id='',reservation_id='r-rcohc3gr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T19:29:46Z,user_data=None,user_id='7c24e8f82e7842b785e565ac65c7f494',uuid=e73931e9-f7fa-4666-b781-700b385532a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.808 189568 DEBUG nova.network.os_vif_util [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converting VIF {"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.808 189568 DEBUG nova.network.os_vif_util [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:70,bridge_name='br-int',has_traffic_filtering=True,id=3cef930c-870a-4936-a206-b4c3a7ce5c1a,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cef930c-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.809 189568 DEBUG os_vif [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:70,bridge_name='br-int',has_traffic_filtering=True,id=3cef930c-870a-4936-a206-b4c3a7ce5c1a,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cef930c-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.839 189568 DEBUG ovsdbapp.backend.ovs_idl [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.840 189568 DEBUG ovsdbapp.backend.ovs_idl [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.840 189568 DEBUG ovsdbapp.backend.ovs_idl [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.840 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.841 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.841 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.842 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.844 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.846 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.853 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.854 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.854 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  1 19:29:52 compute-0 nova_compute[189564]: 2025-12-01 19:29:52.855 189568 INFO oslo.privsep.daemon [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpdvuukjno/privsep.sock']
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.153 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.537 189568 INFO oslo.privsep.daemon [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Spawned new privsep daemon via rootwrap
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.422 239816 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.430 239816 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.434 239816 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.434 239816 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239816
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.827 189568 DEBUG nova.network.neutron [req-6588a8c5-fe2a-41e2-9f60-4905bc8565dc req-c25ff29e-a203-4a7d-bf3b-48a7d57aa4ea 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated VIF entry in instance network info cache for port 3cef930c-870a-4936-a206-b4c3a7ce5c1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.829 189568 DEBUG nova.network.neutron [req-6588a8c5-fe2a-41e2-9f60-4905bc8565dc req-c25ff29e-a203-4a7d-bf3b-48a7d57aa4ea 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.834 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.835 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cef930c-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.837 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cef930c-87, col_values=(('external_ids', {'iface-id': '3cef930c-870a-4936-a206-b4c3a7ce5c1a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:8b:70', 'vm-uuid': 'e73931e9-f7fa-4666-b781-700b385532a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 19:29:53 compute-0 NetworkManager[56474]: <info>  [1764617393.8424] manager: (tap3cef930c-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.844 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.853 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.855 189568 INFO os_vif [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:8b:70,bridge_name='br-int',has_traffic_filtering=True,id=3cef930c-870a-4936-a206-b4c3a7ce5c1a,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cef930c-87')
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.858 189568 DEBUG oslo_concurrency.lockutils [req-6588a8c5-fe2a-41e2-9f60-4905bc8565dc req-c25ff29e-a203-4a7d-bf3b-48a7d57aa4ea 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.930 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.931 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.932 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.933 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No VIF found with MAC fa:16:3e:fc:8b:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  1 19:29:53 compute-0 nova_compute[189564]: 2025-12-01 19:29:53.934 189568 INFO nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Using config drive
Dec  1 19:29:54 compute-0 nova_compute[189564]: 2025-12-01 19:29:54.409 189568 INFO nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Creating config drive at /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.config
Dec  1 19:29:54 compute-0 nova_compute[189564]: 2025-12-01 19:29:54.419 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2kslgn3w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:29:54 compute-0 nova_compute[189564]: 2025-12-01 19:29:54.552 189568 DEBUG oslo_concurrency.processutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2kslgn3w" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:29:54 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  1 19:29:54 compute-0 kernel: tap3cef930c-87: entered promiscuous mode
Dec  1 19:29:54 compute-0 NetworkManager[56474]: <info>  [1764617394.6878] manager: (tap3cef930c-87): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Dec  1 19:29:54 compute-0 nova_compute[189564]: 2025-12-01 19:29:54.690 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:54 compute-0 ovn_controller[97948]: 2025-12-01T19:29:54Z|00027|binding|INFO|Claiming lport 3cef930c-870a-4936-a206-b4c3a7ce5c1a for this chassis.
Dec  1 19:29:54 compute-0 ovn_controller[97948]: 2025-12-01T19:29:54Z|00028|binding|INFO|3cef930c-870a-4936-a206-b4c3a7ce5c1a: Claiming fa:16:3e:fc:8b:70 192.168.0.47
Dec  1 19:29:54 compute-0 nova_compute[189564]: 2025-12-01 19:29:54.700 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:54 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:54.726 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:8b:70 192.168.0.47'], port_security=['fa:16:3e:fc:8b:70 192.168.0.47'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.47/24', 'neutron:device_id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a4b8529-6171-4880-a97c-66966115a61b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35d2a9caf1634dca9fc12ec078239d84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e61a5e79-a7e0-4e4e-bcbc-f9aad845c2b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58f8227a-30b3-42df-b03a-90442a651a6d, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=3cef930c-870a-4936-a206-b4c3a7ce5c1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 19:29:54 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:54.729 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 3cef930c-870a-4936-a206-b4c3a7ce5c1a in datapath 2a4b8529-6171-4880-a97c-66966115a61b bound to our chassis
Dec  1 19:29:54 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:54.733 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a4b8529-6171-4880-a97c-66966115a61b
Dec  1 19:29:54 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:54.736 106833 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpsnhej984/privsep.sock']
Dec  1 19:29:54 compute-0 systemd-udevd[239845]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 19:29:54 compute-0 NetworkManager[56474]: <info>  [1764617394.7654] device (tap3cef930c-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 19:29:54 compute-0 NetworkManager[56474]: <info>  [1764617394.7660] device (tap3cef930c-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 19:29:54 compute-0 systemd-machined[155891]: New machine qemu-1-instance-00000001.
Dec  1 19:29:54 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  1 19:29:54 compute-0 nova_compute[189564]: 2025-12-01 19:29:54.827 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:29:54 compute-0 ovn_controller[97948]: 2025-12-01T19:29:54Z|00029|binding|INFO|Setting lport 3cef930c-870a-4936-a206-b4c3a7ce5c1a ovn-installed in OVS
Dec  1 19:29:54 compute-0 ovn_controller[97948]: 2025-12-01T19:29:54Z|00030|binding|INFO|Setting lport 3cef930c-870a-4936-a206-b4c3a7ce5c1a up in Southbound
Dec  1 19:29:54 compute-0 nova_compute[189564]: 2025-12-01 19:29:54.833 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.111 189568 DEBUG nova.compute.manager [req-784f8f14-8c57-45f5-ac44-f990ac53f881 req-2fe6cab4-3a9e-4b3c-9777-4128e1e467dd 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received event network-vif-plugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.112 189568 DEBUG oslo_concurrency.lockutils [req-784f8f14-8c57-45f5-ac44-f990ac53f881 req-2fe6cab4-3a9e-4b3c-9777-4128e1e467dd 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.112 189568 DEBUG oslo_concurrency.lockutils [req-784f8f14-8c57-45f5-ac44-f990ac53f881 req-2fe6cab4-3a9e-4b3c-9777-4128e1e467dd 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.112 189568 DEBUG oslo_concurrency.lockutils [req-784f8f14-8c57-45f5-ac44-f990ac53f881 req-2fe6cab4-3a9e-4b3c-9777-4128e1e467dd 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.112 189568 DEBUG nova.compute.manager [req-784f8f14-8c57-45f5-ac44-f990ac53f881 req-2fe6cab4-3a9e-4b3c-9777-4128e1e467dd 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Processing event network-vif-plugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.438 106833 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.439 106833 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsnhej984/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.307 239862 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.313 239862 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.316 239862 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.316 239862 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239862#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.444 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[fb480792-0ef7-4ec9-862f-cb1b27b86ac0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.544 189568 DEBUG nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.545 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764617395.5441182, e73931e9-f7fa-4666-b781-700b385532a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.545 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] VM Started (Lifecycle Event)#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.568 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.575 189568 INFO nova.virt.libvirt.driver [-] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Instance spawned successfully.#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.576 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.614 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.623 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.629 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.629 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.630 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.630 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.630 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.631 189568 DEBUG nova.virt.libvirt.driver [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
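
The six "Found default for ..." records show the libvirt driver pinning the bus and model defaults it actually used (sata CD-ROM, virtio disk/video/VIF, USB input and tablet) so the same virtual hardware is re-created on later reboots and rebuilds. A minimal sketch of that registration, assuming only what the log shows (the function name and the image_ prefix follow nova's system-metadata convention):

    # Defaults observed in the log above; recorded once at first boot so the
    # instance's virtual hardware stays stable for its lifetime.
    DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_instance_details(image_props, system_metadata):
        for key, default in DEFAULTS.items():
            if key not in image_props:
                # nova keeps image properties in system_metadata under image_*
                system_metadata['image_' + key] = default
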
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.652 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.652 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764617395.547868, e73931e9-f7fa-4666-b781-700b385532a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.652 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] VM Paused (Lifecycle Event)#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.681 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.688 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764617395.5498974, e73931e9-f7fa-4666-b781-700b385532a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.688 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] VM Resumed (Lifecycle Event)#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.695 189568 INFO nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Took 8.89 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.696 189568 DEBUG nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.710 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.717 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.743 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
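
The Started/Paused/Resumed burst is normal for a libvirt spawn: the guest is created paused and then resumed, and each libvirt event is replayed to the compute manager. The handler compares the database power state (0, NOSTATE) against the hypervisor's (1, RUNNING) but defers any reconciliation while a task owns the instance, which is exactly the "pending task (spawning). Skip." lines above. A sketch of that guard, using the power-state codes as they appear in the records (the function shape is an assumption):

    NOSTATE, RUNNING = 0, 1  # DB power_state / VM power_state in the log above

    def sync_power_state(db_power_state, vm_power_state, task_state):
        """Decide whether to reconcile the DB view with the hypervisor view."""
        if task_state is not None:
            # e.g. 'spawning': the build path owns the instance; do nothing.
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'
        return 'in-sync'

    print(sync_power_state(NOSTATE, RUNNING, 'spawning'))  # -> skip
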
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.764 189568 INFO nova.compute.manager [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Took 9.56 seconds to build instance.#033[00m
Dec  1 19:29:55 compute-0 nova_compute[189564]: 2025-12-01 19:29:55.780 189568 DEBUG oslo_concurrency.lockutils [None req-286a6c53-945b-4040-bb5b-5f674c790b5e 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.942 239862 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.942 239862 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:55 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:55.942 239862 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
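
Every acquire/release pair above (the build lock held 9.725s in nova, the short-lived "context-manager" lock in the metadata agent) comes from oslo.concurrency's lockutils, which logs wait and held times around each guarded section. Both usage forms, as a sketch with names borrowed from the log:

    from oslo_concurrency import lockutils

    # Context-manager form, as around _locked_do_build_and_run_instance.
    with lockutils.lock('e73931e9-f7fa-4666-b781-700b385532a9', external=False):
        pass  # critical section: build and run the instance

    # Decorator form, typical for one-time initialization such as the
    # neutron_lib context-manager creation.
    @lockutils.synchronized('context-manager')
    def create_context_manager():
        pass
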
Dec  1 19:29:56 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:56.487 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[673577a1-7e33-4813-b693-2388e6f506a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:56 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:56.489 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2a4b8529-61 in ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 19:29:56 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:56.491 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2a4b8529-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 19:29:56 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:56.492 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[83c8de51-1f61-470e-84a5-793445d9e37d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:56 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:56.495 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[44b04761-1584-431e-88d1-96e3807e54ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:56 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:56.521 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[ee394ebf-1e5b-4b70-8f50-144bc5f56b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:56 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:56.543 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f4a2ef-b421-4afc-bc1c-98b70e968d7e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:56 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:56.546 106833 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpvkdzljzw/privsep.sock']#033[00m
Dec  1 19:29:56 compute-0 podman[239878]: 2025-12-01 19:29:56.63676441 +0000 UTC m=+0.083034118 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.license=GPLv2)
Dec  1 19:29:56 compute-0 podman[239880]: 2025-12-01 19:29:56.668550669 +0000 UTC m=+0.114734874 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 19:29:56 compute-0 podman[239881]: 2025-12-01 19:29:56.67509995 +0000 UTC m=+0.120042897 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:29:57 compute-0 nova_compute[189564]: 2025-12-01 19:29:57.211 189568 DEBUG nova.compute.manager [req-718a783c-c874-44ca-b27e-da661e50dfc2 req-300d33d7-9f35-430e-9bec-b1961288c7ce 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received event network-vif-plugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:29:57 compute-0 nova_compute[189564]: 2025-12-01 19:29:57.211 189568 DEBUG oslo_concurrency.lockutils [req-718a783c-c874-44ca-b27e-da661e50dfc2 req-300d33d7-9f35-430e-9bec-b1961288c7ce 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:57 compute-0 nova_compute[189564]: 2025-12-01 19:29:57.211 189568 DEBUG oslo_concurrency.lockutils [req-718a783c-c874-44ca-b27e-da661e50dfc2 req-300d33d7-9f35-430e-9bec-b1961288c7ce 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:57 compute-0 nova_compute[189564]: 2025-12-01 19:29:57.212 189568 DEBUG oslo_concurrency.lockutils [req-718a783c-c874-44ca-b27e-da661e50dfc2 req-300d33d7-9f35-430e-9bec-b1961288c7ce 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:57 compute-0 nova_compute[189564]: 2025-12-01 19:29:57.212 189568 DEBUG nova.compute.manager [req-718a783c-c874-44ca-b27e-da661e50dfc2 req-300d33d7-9f35-430e-9bec-b1961288c7ce 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] No waiting events found dispatching network-vif-plugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:29:57 compute-0 nova_compute[189564]: 2025-12-01 19:29:57.213 189568 WARNING nova.compute.manager [req-718a783c-c874-44ca-b27e-da661e50dfc2 req-300d33d7-9f35-430e-9bec-b1961288c7ce 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received unexpected event network-vif-plugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a for instance with vm_state active and task_state None.#033[00m
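
The WARNING is benign ordering noise: the wait for network-vif-plugged completed at 19:29:55 ("completed in 0 seconds"), so when Neutron's event lands at 19:29:57 the instance is already active and pop_instance_event finds no registered waiter. Conceptually the registry is a per-instance map of events; a minimal sketch (not nova's actual code, which uses eventlet rather than threading):

    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_instance_event(instance, name):
        ev = threading.Event()
        waiters[(instance, name)] = ev
        return ev  # caller blocks on ev.wait(timeout) during the build

    def pop_instance_event(instance, name):
        ev = waiters.pop((instance, name), None)
        if ev is None:
            return False  # nobody waiting: logged as an unexpected event
        ev.set()
        return True
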
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.228 106833 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.230 106833 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvkdzljzw/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.105 239942 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.109 239942 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.111 239942 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.111 239942 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239942#033[00m
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.235 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfc3ff9-2507-4b14-8f58-743ba5193c2f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
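
Here the agent forks a second privsep helper for the neutron.privileged.link_cmd context: rootwrap execs it as root, it reports uid/gid 0/0 but holds only CAP_NET_ADMIN and CAP_SYS_ADMIN, and from then on it serves the numbered "privsep: reply" records. Declaring such a context follows the oslo.privsep pattern; a sketch mirroring neutron's privileged package (the config section and function here are illustrative):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Functions decorated with @link_cmd.entrypoint run inside a separate
    # root helper that keeps only the listed capabilities.
    link_cmd = priv_context.PrivContext(
        'neutron',
        cfg_section='privsep_link',
        pypath=__name__ + '.link_cmd',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @link_cmd.entrypoint
    def set_link_attribute(ifname, namespace, **attrs):
        ...  # executes with uid/gid 0 inside the privsep daemon
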
Dec  1 19:29:57 compute-0 nova_compute[189564]: 2025-12-01 19:29:57.655 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.733 239942 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.734 239942 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:29:57 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:57.735 239942 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:29:57 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 19:29:57 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.316 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[27a79571-e5cf-4eeb-981b-1e7291773670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 NetworkManager[56474]: <info>  [1764617398.3562] manager: (tap2a4b8529-60): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.358 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[44f38428-304b-475a-b90b-f74ee56aabc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.385 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[2b467ec5-2e7c-443b-b3fd-26fe4aa93b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.388 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[541dc51d-4340-4265-97e8-2b6908f2f91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 systemd-udevd[239976]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 19:29:58 compute-0 NetworkManager[56474]: <info>  [1764617398.4197] device (tap2a4b8529-60): carrier: link connected
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.425 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[fb04cf2a-50c9-4358-bfd1-6d4ec54362c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.450 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[134afbfc-b1fa-4c39-8691-882bb8fbbdf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a4b8529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388613, 'reachable_time': 23320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239978, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.471 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9e24d454-7c13-4427-9943-ac8215675982]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:81e1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388613, 'tstamp': 388613}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239993, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.490 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb16f5e-36eb-4536-a7d6-f2f0d981df7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a4b8529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388613, 'reachable_time': 23320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239994, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
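
The two large RTM_NEWLINK payloads are netlink link dumps for the new veth leg tap2a4b8529-61 inside the ovnmeta- namespace, marshalled back over privsep: carrier is up, the MAC is fa:16:3e:47:81:e1, MTU 1500. Neutron's privileged ip_lib produces these with pyroute2; reading the same state directly would look roughly like this (namespace and interface names taken from the log):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b') as ns:
        idx = ns.link_lookup(ifname='tap2a4b8529-61')[0]
        (msg,) = ns.link('get', index=idx)
        print(msg.get_attr('IFLA_ADDRESS'))    # fa:16:3e:47:81:e1
        print(msg.get_attr('IFLA_OPERSTATE'))  # UP
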
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.525 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[fef7f93e-9d25-4e31-b3a3-cbe7435233ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.596 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d4aef131-7787-4919-85de-76e3b54ac020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.599 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a4b8529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.600 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.600 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a4b8529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:29:58 compute-0 kernel: tap2a4b8529-60: entered promiscuous mode
Dec  1 19:29:58 compute-0 nova_compute[189564]: 2025-12-01 19:29:58.603 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:29:58 compute-0 NetworkManager[56474]: <info>  [1764617398.6044] manager: (tap2a4b8529-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.610 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a4b8529-60, col_values=(('external_ids', {'iface-id': 'f95692ff-1cac-46fe-9e62-21af9fa55eb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
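
These three ovsdbapp transactions re-home the external leg of the veth: drop tap2a4b8529-60 from br-ex if present (a no-op here), plug it into br-int, and set external_ids:iface-id to the metadata port UUID so ovn-controller can claim the binding. The same sequence issued directly through ovsdbapp, as a sketch (the socket path and timeout are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap2a4b8529-60', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap2a4b8529-60', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap2a4b8529-60',
            ('external_ids',
             {'iface-id': 'f95692ff-1cac-46fe-9e62-21af9fa55eb1'})))
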
Dec  1 19:29:58 compute-0 nova_compute[189564]: 2025-12-01 19:29:58.611 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:29:58 compute-0 ovn_controller[97948]: 2025-12-01T19:29:58Z|00031|binding|INFO|Releasing lport f95692ff-1cac-46fe-9e62-21af9fa55eb1 from this chassis (sb_readonly=0)
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.613 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2a4b8529-6171-4880-a97c-66966115a61b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2a4b8529-6171-4880-a97c-66966115a61b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.614 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9c446c9b-3ddb-4174-a159-acdd38724dfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.615 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: global
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-2a4b8529-6171-4880-a97c-66966115a61b
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/2a4b8529-6171-4880-a97c-66966115a61b.pid.haproxy
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID 2a4b8529-6171-4880-a97c-66966115a61b
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
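
The rendered config binds haproxy to 169.254.169.254:80 inside the ovnmeta- namespace and adds X-OVN-Network-ID to every request before forwarding it to the metadata agent over the /var/lib/neutron/metadata_proxy socket; the agent combines that header with the client address to work out which port is asking. From inside the guest, the whole chain reduces to one request:

    import requests

    # 169.254.169.254 is answered by the haproxy instance configured above.
    r = requests.get('http://169.254.169.254/openstack/latest/meta_data.json',
                     timeout=5)
    print(r.json()['uuid'])  # should be e73931e9-f7fa-4666-b781-700b385532a9
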
Dec  1 19:29:58 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:29:58.616 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'env', 'PROCESS_TAG=haproxy-2a4b8529-6171-4880-a97c-66966115a61b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2a4b8529-6171-4880-a97c-66966115a61b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 19:29:58 compute-0 nova_compute[189564]: 2025-12-01 19:29:58.627 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:29:58 compute-0 nova_compute[189564]: 2025-12-01 19:29:58.840 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:29:59 compute-0 podman[240027]: 2025-12-01 19:29:59.058093781 +0000 UTC m=+0.067066126 container create d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:29:59 compute-0 systemd[1]: Started libpod-conmon-d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd.scope.
Dec  1 19:29:59 compute-0 podman[240027]: 2025-12-01 19:29:59.017519871 +0000 UTC m=+0.026492246 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 19:29:59 compute-0 systemd[1]: Started libcrun container.
Dec  1 19:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dcbb4f8550a56991064453850edc00aa9ca762c63447d47eb35d8dba1732d59/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 19:29:59 compute-0 podman[240027]: 2025-12-01 19:29:59.168897542 +0000 UTC m=+0.177869907 container init d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:29:59 compute-0 podman[240027]: 2025-12-01 19:29:59.176958381 +0000 UTC m=+0.185930726 container start d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  1 19:29:59 compute-0 neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b[240041]: [NOTICE]   (240045) : New worker (240047) forked
Dec  1 19:29:59 compute-0 neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b[240041]: [NOTICE]   (240045) : Loading success.
Dec  1 19:29:59 compute-0 podman[203750]: time="2025-12-01T19:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:29:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:29:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4763 "" "Go-http-client/1.1"
Dec  1 19:30:01 compute-0 openstack_network_exporter[205914]: ERROR   19:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:30:01 compute-0 openstack_network_exporter[205914]: ERROR   19:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:30:01 compute-0 openstack_network_exporter[205914]: ERROR   19:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:30:01 compute-0 openstack_network_exporter[205914]: ERROR   19:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:30:01 compute-0 openstack_network_exporter[205914]: ERROR   19:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
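
These exporter errors are discovery failures rather than crashes: appctl-style calls need a control socket named <daemon>.<pid>.ctl in the daemon's rundir, and on a compute node ovn-northd does not run at all (it lives with the control plane), so the lookup comes back empty. The check amounts to a glob, roughly:

    import glob
    import os

    def find_ctl(rundir, daemon):
        """Return <daemon>.<pid>.ctl if the daemon runs locally, else None."""
        hits = glob.glob(os.path.join(rundir, daemon + '.*.ctl'))
        return hits[0] if hits else None

    print(find_ctl('/var/run/ovn', 'ovn-northd'))  # None on this compute node
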
Dec  1 19:30:02 compute-0 nova_compute[189564]: 2025-12-01 19:30:02.660 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:03 compute-0 nova_compute[189564]: 2025-12-01 19:30:03.843 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:04 compute-0 podman[240056]: 2025-12-01 19:30:04.3273991 +0000 UTC m=+0.102076734 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec  1 19:30:07 compute-0 nova_compute[189564]: 2025-12-01 19:30:07.661 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:08 compute-0 nova_compute[189564]: 2025-12-01 19:30:08.846 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:08 compute-0 NetworkManager[56474]: <info>  [1764617408.9079] manager: (patch-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Dec  1 19:30:08 compute-0 NetworkManager[56474]: <info>  [1764617408.9097] device (patch-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 19:30:08 compute-0 NetworkManager[56474]: <info>  [1764617408.9119] manager: (patch-br-int-to-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Dec  1 19:30:08 compute-0 NetworkManager[56474]: <info>  [1764617408.9180] device (patch-br-int-to-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  1 19:30:08 compute-0 NetworkManager[56474]: <info>  [1764617408.9203] manager: (patch-br-int-to-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Dec  1 19:30:08 compute-0 NetworkManager[56474]: <info>  [1764617408.9219] manager: (patch-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec  1 19:30:08 compute-0 NetworkManager[56474]: <info>  [1764617408.9231] device (patch-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 19:30:08 compute-0 NetworkManager[56474]: <info>  [1764617408.9240] device (patch-br-int-to-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  1 19:30:08 compute-0 ovn_controller[97948]: 2025-12-01T19:30:08Z|00032|binding|INFO|Releasing lport f95692ff-1cac-46fe-9e62-21af9fa55eb1 from this chassis (sb_readonly=0)
Dec  1 19:30:08 compute-0 nova_compute[189564]: 2025-12-01 19:30:08.925 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:08 compute-0 ovn_controller[97948]: 2025-12-01T19:30:08Z|00033|binding|INFO|Releasing lport f95692ff-1cac-46fe-9e62-21af9fa55eb1 from this chassis (sb_readonly=0)
Dec  1 19:30:08 compute-0 nova_compute[189564]: 2025-12-01 19:30:08.967 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:09 compute-0 nova_compute[189564]: 2025-12-01 19:30:09.359 189568 DEBUG nova.compute.manager [req-4c1b391d-fcc9-412e-baa9-ef4fd84109c4 req-ceb1f2d6-f100-4cd5-bbda-30e79168e81e 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received event network-changed-3cef930c-870a-4936-a206-b4c3a7ce5c1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:30:09 compute-0 nova_compute[189564]: 2025-12-01 19:30:09.359 189568 DEBUG nova.compute.manager [req-4c1b391d-fcc9-412e-baa9-ef4fd84109c4 req-ceb1f2d6-f100-4cd5-bbda-30e79168e81e 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Refreshing instance network info cache due to event network-changed-3cef930c-870a-4936-a206-b4c3a7ce5c1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 19:30:09 compute-0 nova_compute[189564]: 2025-12-01 19:30:09.360 189568 DEBUG oslo_concurrency.lockutils [req-4c1b391d-fcc9-412e-baa9-ef4fd84109c4 req-ceb1f2d6-f100-4cd5-bbda-30e79168e81e 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:30:09 compute-0 nova_compute[189564]: 2025-12-01 19:30:09.360 189568 DEBUG oslo_concurrency.lockutils [req-4c1b391d-fcc9-412e-baa9-ef4fd84109c4 req-ceb1f2d6-f100-4cd5-bbda-30e79168e81e 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:30:09 compute-0 nova_compute[189564]: 2025-12-01 19:30:09.361 189568 DEBUG nova.network.neutron [req-4c1b391d-fcc9-412e-baa9-ef4fd84109c4 req-ceb1f2d6-f100-4cd5-bbda-30e79168e81e 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Refreshing network info cache for port 3cef930c-870a-4936-a206-b4c3a7ce5c1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 19:30:10 compute-0 podman[240079]: 2025-12-01 19:30:10.333037173 +0000 UTC m=+0.092414256 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:30:10 compute-0 nova_compute[189564]: 2025-12-01 19:30:10.577 189568 DEBUG nova.network.neutron [req-4c1b391d-fcc9-412e-baa9-ef4fd84109c4 req-ceb1f2d6-f100-4cd5-bbda-30e79168e81e 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated VIF entry in instance network info cache for port 3cef930c-870a-4936-a206-b4c3a7ce5c1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 19:30:10 compute-0 nova_compute[189564]: 2025-12-01 19:30:10.578 189568 DEBUG nova.network.neutron [req-4c1b391d-fcc9-412e-baa9-ef4fd84109c4 req-ceb1f2d6-f100-4cd5-bbda-30e79168e81e 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:30:10 compute-0 nova_compute[189564]: 2025-12-01 19:30:10.624 189568 DEBUG oslo_concurrency.lockutils [req-4c1b391d-fcc9-412e-baa9-ef4fd84109c4 req-ceb1f2d6-f100-4cd5-bbda-30e79168e81e 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:30:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:30:12.172 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:30:12.173 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:30:12.174 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
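
The three lines above show oslo.concurrency's standard lock trace, an "Acquiring" line, an "acquired :: waited Ns" line, and a "released :: held Ns" line, bracketing the process monitor's child check. A stdlib-only illustration of how those waited/held figures arise (this is not the oslo.concurrency implementation; all names here are made up):

    import threading
    import time

    _lock = threading.Lock()

    def with_timed_lock(name, fn):
        """Mimic the lock trace seen in the log: report how long we
        waited to acquire the lock and how long we held it."""
        print(f'Acquiring lock "{name}"')
        t0 = time.monotonic()
        with _lock:
            waited = time.monotonic() - t0
            print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
            t1 = time.monotonic()
            try:
                return fn()
            finally:
                held = time.monotonic() - t1
                print(f'Lock "{name}" "released" :: held {held:.3f}s')

    # with_timed_lock("_check_child_processes", lambda: time.sleep(0.001))
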
Dec  1 19:30:12 compute-0 nova_compute[189564]: 2025-12-01 19:30:12.664 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:13 compute-0 nova_compute[189564]: 2025-12-01 19:30:13.847 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
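
The recurring "[POLLIN] on fd 27 __log_wakeup" lines are the OVSDB IDL's poll loop noting that its JSON-RPC socket to ovsdb-server became readable; nothing is wrong, this is ordinary event-loop wakeup at debug level. The same readiness test with only the standard library (fd and timeout are placeholders):

    import os
    import select

    def wait_readable(fd, timeout_ms=5000):
        """Block until fd is readable, like the IDL poll loop that
        logs '[POLLIN] on fd N' whenever the socket has data."""
        poller = select.poll()
        poller.register(fd, select.POLLIN)
        for fd_ready, events in poller.poll(timeout_ms):
            if events & select.POLLIN:
                return True
        return False

    # r, w = os.pipe(); os.write(w, b"x"); assert wait_readable(r)
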
Dec  1 19:30:15 compute-0 podman[240105]: 2025-12-01 19:30:15.375756265 +0000 UTC m=+0.143401926 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  1 19:30:17 compute-0 nova_compute[189564]: 2025-12-01 19:30:17.666 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:18 compute-0 nova_compute[189564]: 2025-12-01 19:30:18.850 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:21 compute-0 podman[240125]: 2025-12-01 19:30:21.296644765 +0000 UTC m=+0.072981217 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:30:22 compute-0 nova_compute[189564]: 2025-12-01 19:30:22.668 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:23 compute-0 podman[240148]: 2025-12-01 19:30:23.37931052 +0000 UTC m=+0.142224441 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, managed_by=edpm_ansible)
Dec  1 19:30:23 compute-0 podman[240149]: 2025-12-01 19:30:23.384294764 +0000 UTC m=+0.142567441 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 19:30:23 compute-0 nova_compute[189564]: 2025-12-01 19:30:23.852 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:27 compute-0 podman[240186]: 2025-12-01 19:30:27.207277221 +0000 UTC m=+0.063664151 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:30:27 compute-0 podman[240185]: 2025-12-01 19:30:27.217074413 +0000 UTC m=+0.078771737 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4)
Dec  1 19:30:27 compute-0 podman[240188]: 2025-12-01 19:30:27.24298268 +0000 UTC m=+0.095508651 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Dec  1 19:30:27 compute-0 nova_compute[189564]: 2025-12-01 19:30:27.671 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:28 compute-0 ovn_controller[97948]: 2025-12-01T19:30:28Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:8b:70 192.168.0.47
Dec  1 19:30:28 compute-0 ovn_controller[97948]: 2025-12-01T19:30:28Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:8b:70 192.168.0.47
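
The two pinctrl lines show ovn-controller answering DHCP locally for port fa:16:3e:fc:8b:70 (192.168.0.47): an offer immediately acknowledged, with no external DHCP server involved. A sketch that pairs such DHCPOFFER/DHCPACK lines per client MAC from a saved copy of this log (the regex and the file path are assumptions):

    import re
    from collections import defaultdict

    DHCP_RE = re.compile(
        r"\|pinctrl\([^)]*\)\|INFO\|(DHCPOFFER|DHCPACK)\s+"
        r"([0-9a-f:]{17})\s+(\d+\.\d+\.\d+\.\d+)")

    def dhcp_transactions(path):
        """Group ovn-controller DHCPOFFER/DHCPACK log lines by MAC."""
        seen = defaultdict(list)
        with open(path) as fh:
            for line in fh:
                m = DHCP_RE.search(line)
                if m:
                    kind, mac, ip = m.groups()
                    seen[mac].append((kind, ip))
        return dict(seen)

    # dhcp_transactions("/var/log/messages")
    # -> {'fa:16:3e:fc:8b:70': [('DHCPOFFER', '192.168.0.47'),
    #                           ('DHCPACK', '192.168.0.47')]}
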
Dec  1 19:30:28 compute-0 nova_compute[189564]: 2025-12-01 19:30:28.855 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:29 compute-0 podman[203750]: time="2025-12-01T19:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:30:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:30:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4766 "" "Go-http-client/1.1"
Dec  1 19:30:31 compute-0 openstack_network_exporter[205914]: ERROR   19:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:30:31 compute-0 openstack_network_exporter[205914]: ERROR   19:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:30:31 compute-0 openstack_network_exporter[205914]: ERROR   19:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:30:31 compute-0 openstack_network_exporter[205914]: ERROR   19:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:30:31 compute-0 openstack_network_exporter[205914]: ERROR   19:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
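
These exporter errors mean no ovs-appctl control socket was found where openstack_network_exporter looked. On a compute node that is expected for ovn-northd (it runs on the control plane, not here), and the dpif-netdev calls fail most likely because this host uses the kernel datapath ('datapath_type': 'system' in the port binding above), so there are no userspace PMD datapaths to query. A sketch that lists whichever control sockets actually exist (directories are the usual defaults; adjust per deployment):

    import glob

    def control_sockets(run_dirs=("/var/run/openvswitch", "/var/run/ovn")):
        """List the ovs-appctl control sockets the exporter probes for.
        An empty result for a daemon explains the 'no control socket
        files found' errors above."""
        found = {}
        for d in run_dirs:
            for path in glob.glob(f"{d}/*.ctl"):
                found.setdefault(d, []).append(path)
        return found

    # print(control_sockets())
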
Dec  1 19:30:32 compute-0 nova_compute[189564]: 2025-12-01 19:30:32.673 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:33 compute-0 nova_compute[189564]: 2025-12-01 19:30:33.858 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:35 compute-0 podman[240255]: 2025-12-01 19:30:35.371679049 +0000 UTC m=+0.130853080 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec  1 19:30:37 compute-0 nova_compute[189564]: 2025-12-01 19:30:37.676 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:38 compute-0 nova_compute[189564]: 2025-12-01 19:30:38.861 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:38 compute-0 ovn_controller[97948]: 2025-12-01T19:30:38Z|00034|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Dec  1 19:30:41 compute-0 podman[240275]: 2025-12-01 19:30:41.332437957 +0000 UTC m=+0.085693000 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:30:42 compute-0 nova_compute[189564]: 2025-12-01 19:30:42.680 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:43 compute-0 nova_compute[189564]: 2025-12-01 19:30:43.865 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:46 compute-0 nova_compute[189564]: 2025-12-01 19:30:46.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:46 compute-0 nova_compute[189564]: 2025-12-01 19:30:46.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:30:46 compute-0 nova_compute[189564]: 2025-12-01 19:30:46.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:30:46 compute-0 podman[240301]: 2025-12-01 19:30:46.333405927 +0000 UTC m=+0.096187652 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Dec  1 19:30:46 compute-0 nova_compute[189564]: 2025-12-01 19:30:46.486 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:30:46 compute-0 nova_compute[189564]: 2025-12-01 19:30:46.487 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:30:46 compute-0 nova_compute[189564]: 2025-12-01 19:30:46.487 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:30:46 compute-0 nova_compute[189564]: 2025-12-01 19:30:46.487 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:30:47 compute-0 nova_compute[189564]: 2025-12-01 19:30:47.683 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:48 compute-0 nova_compute[189564]: 2025-12-01 19:30:48.298 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:30:48 compute-0 nova_compute[189564]: 2025-12-01 19:30:48.320 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:30:48 compute-0 nova_compute[189564]: 2025-12-01 19:30:48.321 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
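
The heal cycle above (req-7cec860b...) re-reads the port from neutron and rewrites instance_info_cache with the network_info payload shown a few lines up. That payload is ordinary JSON once cut out of the log line; a sketch pulling fixed and floating IPs from it (instance_ips is a hypothetical helper, and the blob is assumed to be the JSON array already extracted):

    import json

    def instance_ips(network_info_json):
        """Pull fixed and floating IPs out of a nova network_info blob
        like the one cached for instance e73931e9 above."""
        out = []
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"]
                              for f in ip.get("floating_ips", [])]
                    out.append({"port": vif["id"],
                                "fixed": ip["address"],
                                "floating": floats})
        return out

    # instance_ips(blob)
    # -> [{'port': '3cef930c-...', 'fixed': '192.168.0.47',
    #      'floating': ['192.168.122.206']}]
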
Dec  1 19:30:48 compute-0 nova_compute[189564]: 2025-12-01 19:30:48.322 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:48 compute-0 nova_compute[189564]: 2025-12-01 19:30:48.323 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:48 compute-0 nova_compute[189564]: 2025-12-01 19:30:48.868 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:49 compute-0 nova_compute[189564]: 2025-12-01 19:30:49.266 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:49 compute-0 nova_compute[189564]: 2025-12-01 19:30:49.267 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:50 compute-0 nova_compute[189564]: 2025-12-01 19:30:50.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:50 compute-0 nova_compute[189564]: 2025-12-01 19:30:50.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:50 compute-0 nova_compute[189564]: 2025-12-01 19:30:50.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:30:50 compute-0 nova_compute[189564]: 2025-12-01 19:30:50.251 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:50 compute-0 nova_compute[189564]: 2025-12-01 19:30:50.252 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 19:30:50 compute-0 nova_compute[189564]: 2025-12-01 19:30:50.272 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 19:30:50 compute-0 nova_compute[189564]: 2025-12-01 19:30:50.273 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:50 compute-0 nova_compute[189564]: 2025-12-01 19:30:50.273 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 19:30:51 compute-0 nova_compute[189564]: 2025-12-01 19:30:51.287 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:51 compute-0 nova_compute[189564]: 2025-12-01 19:30:51.746 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:51 compute-0 nova_compute[189564]: 2025-12-01 19:30:51.746 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:51 compute-0 nova_compute[189564]: 2025-12-01 19:30:51.747 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:30:51 compute-0 nova_compute[189564]: 2025-12-01 19:30:51.747 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:30:51 compute-0 nova_compute[189564]: 2025-12-01 19:30:51.863 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:51 compute-0 podman[240323]: 2025-12-01 19:30:51.904890173 +0000 UTC m=+0.097412930 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:30:51 compute-0 nova_compute[189564]: 2025-12-01 19:30:51.962 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:51 compute-0 nova_compute[189564]: 2025-12-01 19:30:51.964 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.054 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.057 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.132 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.134 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.209 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
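
Each disk probe above is qemu-img info wrapped in oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU so a wedged qemu-img cannot take the compute agent down with it; note the audit runs the probe twice per disk here. Rerunning the exact logged command, assuming qemu-img and oslo.concurrency are installed and the instance path still exists:

    import json
    import subprocess

    CMD = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info",
           "/var/lib/nova/instances/"
           "e73931e9-f7fa-4666-b781-700b385532a9/disk",
           "--force-share", "--output=json"]

    def probe_disk():
        """Run the same sandboxed qemu-img probe nova logs above and
        return the parsed JSON (format, virtual size, etc.)."""
        res = subprocess.run(CMD, capture_output=True, text=True,
                             check=True)
        return json.loads(res.stdout)
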
Dec  1 19:30:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:30:52.410 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:30:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:30:52.412 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.411 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.686 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.751 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.753 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5253MB free_disk=72.38386535644531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.753 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.754 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.984 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.985 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:30:52 compute-0 nova_compute[189564]: 2025-12-01 19:30:52.985 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:30:53 compute-0 nova_compute[189564]: 2025-12-01 19:30:53.147 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:30:53 compute-0 nova_compute[189564]: 2025-12-01 19:30:53.164 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
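
The inventory line above contains everything placement needs to size this node: usable capacity per resource class is (total - reserved) * allocation_ratio, giving 7168 MB of RAM, 32 VCPUs and 70.2 GB of disk, consistent with the single running instance consuming 512 MB / 1 VCPU / 2 GB in the "Final resource view". Worked out with the values from the log:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    # Placement's usable capacity per resource class:
    #   capacity = (total - reserved) * allocation_ratio
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # MEMORY_MB: 7168, VCPU: 32, DISK_GB: 70.2
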
Dec  1 19:30:53 compute-0 nova_compute[189564]: 2025-12-01 19:30:53.183 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:30:53 compute-0 nova_compute[189564]: 2025-12-01 19:30:53.184 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:30:53 compute-0 nova_compute[189564]: 2025-12-01 19:30:53.872 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:54 compute-0 podman[240359]: 2025-12-01 19:30:54.37440855 +0000 UTC m=+0.138367041 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, maintainer=Red Hat, Inc., architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-type=git, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:30:54 compute-0 podman[240360]: 2025-12-01 19:30:54.395108338 +0000 UTC m=+0.149246577 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:30:55 compute-0 nova_compute[189564]: 2025-12-01 19:30:55.140 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:55 compute-0 nova_compute[189564]: 2025-12-01 19:30:55.170 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:55 compute-0 nova_compute[189564]: 2025-12-01 19:30:55.171 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:30:57 compute-0 nova_compute[189564]: 2025-12-01 19:30:57.690 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:58 compute-0 podman[240396]: 2025-12-01 19:30:58.329503738 +0000 UTC m=+0.100701302 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  1 19:30:58 compute-0 podman[240397]: 2025-12-01 19:30:58.382794628 +0000 UTC m=+0.138734452 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 19:30:58 compute-0 podman[240398]: 2025-12-01 19:30:58.428855467 +0000 UTC m=+0.182598213 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.603 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "f4a023f0-04a7-470f-88ef-6284e0580f9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.603 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.619 189568 DEBUG nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.694 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.695 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.704 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.705 189568 INFO nova.compute.claims [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.823 189568 DEBUG nova.compute.provider_tree [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.853 189568 DEBUG nova.scheduler.client.report [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.875 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.886 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.887 189568 DEBUG nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.937 189568 DEBUG nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.937 189568 DEBUG nova.network.neutron [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.964 189568 INFO nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 19:30:58 compute-0 nova_compute[189564]: 2025-12-01 19:30:58.996 189568 DEBUG nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.088 189568 DEBUG nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.090 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.090 189568 INFO nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Creating image(s)#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.091 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.092 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.093 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.109 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.200 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.203 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.204 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.219 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.293 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.294 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683,backing_fmt=raw /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.354 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683,backing_fmt=raw /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk 1073741824" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.356 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.357 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.447 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.448 189568 DEBUG nova.virt.disk.api [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Checking if we can resize image /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.448 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.503 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.504 189568 DEBUG nova.virt.disk.api [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Cannot resize image /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.505 189568 DEBUG nova.objects.instance [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'migration_context' on Instance uuid f4a023f0-04a7-470f-88ef-6284e0580f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.521 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.522 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.522 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.534 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.627 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.628 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.628 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.639 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.722 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.724 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:59 compute-0 podman[203750]: time="2025-12-01T19:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:30:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:30:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4771 "" "Go-http-client/1.1"
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.781 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 1073741824" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.783 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.785 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.859 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.861 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.862 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Ensure instance console log exists: /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.863 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.864 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:30:59 compute-0 nova_compute[189564]: 2025-12-01 19:30:59.865 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:31:00 compute-0 nova_compute[189564]: 2025-12-01 19:31:00.585 189568 DEBUG nova.network.neutron [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Successfully updated port: 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 19:31:00 compute-0 nova_compute[189564]: 2025-12-01 19:31:00.633 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:31:00 compute-0 nova_compute[189564]: 2025-12-01 19:31:00.634 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquired lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:31:00 compute-0 nova_compute[189564]: 2025-12-01 19:31:00.635 189568 DEBUG nova.network.neutron [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 19:31:00 compute-0 nova_compute[189564]: 2025-12-01 19:31:00.830 189568 DEBUG nova.network.neutron [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 19:31:01 compute-0 nova_compute[189564]: 2025-12-01 19:31:01.157 189568 DEBUG nova.compute.manager [req-120b7ae6-8c23-40f2-98d2-03b9efaee9d5 req-b6940bb5-af4f-4427-8c0b-fd560c0cc04d 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Received event network-changed-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:31:01 compute-0 nova_compute[189564]: 2025-12-01 19:31:01.157 189568 DEBUG nova.compute.manager [req-120b7ae6-8c23-40f2-98d2-03b9efaee9d5 req-b6940bb5-af4f-4427-8c0b-fd560c0cc04d 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Refreshing instance network info cache due to event network-changed-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 19:31:01 compute-0 nova_compute[189564]: 2025-12-01 19:31:01.158 189568 DEBUG oslo_concurrency.lockutils [req-120b7ae6-8c23-40f2-98d2-03b9efaee9d5 req-b6940bb5-af4f-4427-8c0b-fd560c0cc04d 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:31:01 compute-0 openstack_network_exporter[205914]: ERROR   19:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:31:01 compute-0 openstack_network_exporter[205914]: ERROR   19:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:31:01 compute-0 openstack_network_exporter[205914]: ERROR   19:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:31:01 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:01.414 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:31:01 compute-0 openstack_network_exporter[205914]: ERROR   19:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:31:01 compute-0 openstack_network_exporter[205914]: ERROR   19:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.036 189568 DEBUG nova.network.neutron [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updating instance_info_cache with network_info: [{"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.066 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Releasing lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.067 189568 DEBUG nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Instance network_info: |[{"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.068 189568 DEBUG oslo_concurrency.lockutils [req-120b7ae6-8c23-40f2-98d2-03b9efaee9d5 req-b6940bb5-af4f-4427-8c0b-fd560c0cc04d 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.069 189568 DEBUG nova.network.neutron [req-120b7ae6-8c23-40f2-98d2-03b9efaee9d5 req-b6940bb5-af4f-4427-8c0b-fd560c0cc04d 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Refreshing network info cache for port 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.074 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Start _get_guest_xml network_info=[{"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T19:28:30Z,direct_url=<?>,disk_format='qcow2',id=15bc897a-453b-4133-b6db-08ecdc2b6db0,min_disk=0,min_ram=0,name='cirros',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T19:28:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}], 'ephemerals': [{'guest_format': None, 'encryption_options': None, 'size': 1, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.085 189568 WARNING nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.102 189568 DEBUG nova.virt.libvirt.host [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.103 189568 DEBUG nova.virt.libvirt.host [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.109 189568 DEBUG nova.virt.libvirt.host [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.110 189568 DEBUG nova.virt.libvirt.host [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.110 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.110 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T19:28:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='0891a7f6-7194-4f33-bc11-6f6ab8b16145',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T19:28:30Z,direct_url=<?>,disk_format='qcow2',id=15bc897a-453b-4133-b6db-08ecdc2b6db0,min_disk=0,min_ram=0,name='cirros',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T19:28:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.111 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.111 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.111 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.111 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.111 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.111 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.112 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.112 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.112 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.112 189568 DEBUG nova.virt.hardware [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.115 189568 DEBUG nova.virt.libvirt.vif [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T19:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd',id=2,image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35d2a9caf1634dca9fc12ec078239d84',ramdisk_id='',reservation_id='r-9jn6ac13',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T19:30:59Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NTc4MjE4NjU1NTUwNjgwNzIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY1NzgyMTg2NTU1NTA2ODA3MjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjU3ODIxODY1NTU1MDY4MDcyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
[truncated record: rsyslogd dropped the head of an oversized nova.virt.libvirt.vif DEBUG message here (see the "message too long" notice at 19:31:02 below); the orphaned prefix-less lines were the remainder of the instance's base64-encoded cloud-init multipart user_data, which decodes to the tail of boothook.sh, part-handler.py (writes parts under /var/lib/heat-cfntools and /var/lib/cloud/data), an empty cfn-userdata, loguserdata.py (runs cfn-userdata and logs to /var/log/heat-provision.log), the cfn-metadata-server endpoint https://heat-cfnapi-internal.openstack.svc:8000/v1/, and a cfn-boto-cfg [Boto] section pointing at heat-cfnapi-internal.openstack.svc]
Dec  1 19:31:02 compute-0 nova_compute[189564]: [record head truncated; resumes mid-user_data]...',user_id='7c24e8f82e7842b785e565ac65c7f494',uuid=f4a023f0-04a7-470f-88ef-6284e0580f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.115 189568 DEBUG nova.network.os_vif_util [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converting VIF {"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.116 189568 DEBUG nova.network.os_vif_util [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:1c:a4,bridge_name='br-int',has_traffic_filtering=True,id=0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0aee22ef-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.116 189568 DEBUG nova.objects.instance [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'pci_devices' on Instance uuid f4a023f0-04a7-470f-88ef-6284e0580f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.135 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] End _get_guest_xml xml=<domain type="kvm">
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <uuid>f4a023f0-04a7-470f-88ef-6284e0580f9e</uuid>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <name>instance-00000002</name>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <memory>524288</memory>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <metadata>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <nova:name>vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd</nova:name>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 19:31:02</nova:creationTime>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <nova:flavor name="m1.small">
Dec  1 19:31:02 compute-0 nova_compute[189564]:        <nova:memory>512</nova:memory>
Dec  1 19:31:02 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 19:31:02 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 19:31:02 compute-0 nova_compute[189564]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 19:31:02 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 19:31:02 compute-0 nova_compute[189564]:        <nova:user uuid="7c24e8f82e7842b785e565ac65c7f494">admin</nova:user>
Dec  1 19:31:02 compute-0 nova_compute[189564]:        <nova:project uuid="35d2a9caf1634dca9fc12ec078239d84">admin</nova:project>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="15bc897a-453b-4133-b6db-08ecdc2b6db0"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 19:31:02 compute-0 nova_compute[189564]:        <nova:port uuid="0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3">
Dec  1 19:31:02 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="192.168.0.66" ipVersion="4"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  </metadata>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <system>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <entry name="serial">f4a023f0-04a7-470f-88ef-6284e0580f9e</entry>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <entry name="uuid">f4a023f0-04a7-470f-88ef-6284e0580f9e</entry>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </system>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <os>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  </os>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <features>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <apic/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  </features>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  </clock>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  </cpu>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  <devices>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <target dev="vdb" bus="virtio"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.config"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:0a:1c:a4"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <target dev="tap0aee22ef-1f"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </interface>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/console.log" append="off"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </serial>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <video>
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </video>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </rng>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 19:31:02 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 19:31:02 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 19:31:02 compute-0 nova_compute[189564]:  </devices>
Dec  1 19:31:02 compute-0 nova_compute[189564]: </domain>
Dec  1 19:31:02 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
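
The dump above is an ordinary libvirt domain definition. A small standard-library sketch of pulling the disk and NIC targets back out of it (assumes the XML was saved to domain.xml):

import xml.etree.ElementTree as ET

dom = ET.parse("domain.xml").getroot()
print(dom.findtext("name"))  # instance-00000002

# three <disk> elements: root qcow2 on vda, ephemeral on vdb, config drive on sda
for disk in dom.findall("./devices/disk"):
    src = disk.find("source")
    tgt = disk.find("target")
    print(disk.get("device"), tgt.get("dev"), tgt.get("bus"),
          src.get("file") if src is not None else None)

# one virtio interface on tap0aee22ef-1f
for iface in dom.findall("./devices/interface"):
    print(iface.find("mac").get("address"), iface.find("target").get("dev"))
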
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.135 189568 DEBUG nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Preparing to wait for external event network-vif-plugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.135 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.135 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.135 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
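
The Acquiring/acquired/released triplets above are oslo.concurrency's lockutils logging around the per-instance "<uuid>-events" lock. A sketch of the same pattern in application code (names are illustrative, not Nova's internals):

from oslo_concurrency import lockutils

EVENTS_LOCK = "f4a023f0-04a7-470f-88ef-6284e0580f9e-events"

# decorator form, roughly how Nova guards _create_or_get_event
@lockutils.synchronized(EVENTS_LOCK)
def create_or_get_event():
    ...

# or the explicit context-manager form
with lockutils.lock(EVENTS_LOCK):
    pass  # critical section
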
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.136 189568 DEBUG nova.virt.libvirt.vif [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T19:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd',id=2,image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35d2a9caf1634dca9fc12ec078239d84',ramdisk_id='',reservation_id='r-9jn6ac13',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T19:30:59Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NTc4MjE4NjU1NTUwNjgwNzIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY1NzgyMTg2NTU1NTA2ODA3MjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjU3ODIxODY1NTU1MDY4MDcyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
[the base64 user_data in the record above spills across two more prefix-less lines and is then cut off mid-payload by the rsyslog message-size limit; it is the same cloud-init multipart archive summarized at the truncated 19:31:02.115 record earlier]
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.136 189568 DEBUG nova.network.os_vif_util [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converting VIF {"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.137 189568 DEBUG nova.network.os_vif_util [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:1c:a4,bridge_name='br-int',has_traffic_filtering=True,id=0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0aee22ef-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.137 189568 DEBUG os_vif [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:1c:a4,bridge_name='br-int',has_traffic_filtering=True,id=0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0aee22ef-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.137 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.138 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.138 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.141 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.142 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0aee22ef-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.142 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0aee22ef-1f, col_values=(('external_ids', {'iface-id': '0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:1c:a4', 'vm-uuid': 'f4a023f0-04a7-470f-88ef-6284e0580f9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
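
AddBridgeCommand, AddPortCommand and DbSetCommand are ovsdbapp transaction commands. A sketch of issuing the equivalent transaction directly against the local OVSDB (assumes ovsdbapp is installed and ovsdb-server listens on the default unix socket):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# mirrors the "Running txn" commands logged above
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap0aee22ef-1f", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap0aee22ef-1f",
        ("external_ids", {"iface-id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3",
                          "attached-mac": "fa:16:3e:0a:1c:a4"})))
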
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.144 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:02 compute-0 NetworkManager[56474]: <info>  [1764617462.1459] manager: (tap0aee22ef-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.147 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.153 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.153 189568 INFO os_vif [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:1c:a4,bridge_name='br-int',has_traffic_filtering=True,id=0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0aee22ef-1f')#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.212 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.213 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.213 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.213 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No VIF found with MAC fa:16:3e:0a:1c:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.213 189568 INFO nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Using config drive#033[00m
Dec  1 19:31:02 compute-0 rsyslogd[236874]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 19:31:02.115 189568 DEBUG nova.virt.libvirt.vif [None req-25ca770d-7f [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.628 189568 INFO nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Creating config drive at /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.config#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.633 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptbs6vjc0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.692 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.783 189568 DEBUG oslo_concurrency.processutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptbs6vjc0" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
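
The config drive is a plain ISO 9660 image; the command line is visible in the record above. A sketch of running the same build from Python (paths and flags copied from the log; needs write access to the instance directory):

import subprocess

subprocess.run(
    ["/usr/bin/mkisofs",
     "-o", "/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.config",
     "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
     "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
     "-quiet", "-J", "-r",
     "-V", "config-2",          # the volume label cloud-init looks for
     "/tmp/tmptbs6vjc0"],       # staging dir with the metadata tree
    check=True)
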
Dec  1 19:31:02 compute-0 NetworkManager[56474]: <info>  [1764617462.8869] manager: (tap0aee22ef-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec  1 19:31:02 compute-0 kernel: tap0aee22ef-1f: entered promiscuous mode
Dec  1 19:31:02 compute-0 ovn_controller[97948]: 2025-12-01T19:31:02Z|00035|binding|INFO|Claiming lport 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 for this chassis.
Dec  1 19:31:02 compute-0 ovn_controller[97948]: 2025-12-01T19:31:02Z|00036|binding|INFO|0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3: Claiming fa:16:3e:0a:1c:a4 192.168.0.66
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.893 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:02 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:02.899 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:1c:a4 192.168.0.66'], port_security=['fa:16:3e:0a:1c:a4 192.168.0.66'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vz2nmrxztcck-f2wxpqwzjpbt-22updzqiujy5-port-6brymhhcpz7y', 'neutron:cidrs': '192.168.0.66/24', 'neutron:device_id': 'f4a023f0-04a7-470f-88ef-6284e0580f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a4b8529-6171-4880-a97c-66966115a61b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vz2nmrxztcck-f2wxpqwzjpbt-22updzqiujy5-port-6brymhhcpz7y', 'neutron:project_id': '35d2a9caf1634dca9fc12ec078239d84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e61a5e79-a7e0-4e4e-bcbc-f9aad845c2b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58f8227a-30b3-42df-b03a-90442a651a6d, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:31:02 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:02.900 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 in datapath 2a4b8529-6171-4880-a97c-66966115a61b bound to our chassis#033[00m
Dec  1 19:31:02 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:02.901 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a4b8529-6171-4880-a97c-66966115a61b#033[00m
Dec  1 19:31:02 compute-0 ovn_controller[97948]: 2025-12-01T19:31:02Z|00037|binding|INFO|Setting lport 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 ovn-installed in OVS
Dec  1 19:31:02 compute-0 ovn_controller[97948]: 2025-12-01T19:31:02Z|00038|binding|INFO|Setting lport 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 up in Southbound
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.908 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:02 compute-0 nova_compute[189564]: 2025-12-01 19:31:02.911 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:02 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:02.921 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[dcf3b579-499c-4cbd-8d5e-92dc26eafb9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:31:02 compute-0 systemd-machined[155891]: New machine qemu-2-instance-00000002.
Dec  1 19:31:02 compute-0 systemd-udevd[240506]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 19:31:02 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec  1 19:31:02 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:02.958 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[e0767cc7-bb44-4525-8be5-17acd3092de8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:31:02 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:02.962 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[df033655-c25a-49b7-9da0-f4cc1e6d412f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:31:02 compute-0 NetworkManager[56474]: <info>  [1764617462.9766] device (tap0aee22ef-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 19:31:02 compute-0 NetworkManager[56474]: <info>  [1764617462.9788] device (tap0aee22ef-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 19:31:02 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:02.993 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[d68d15ec-5044-4a4e-ba7d-b98de831b0a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:31:03 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:03.011 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f2858435-f974-449d-b091-47b86e59f35f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a4b8529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388613, 'reachable_time': 23320, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240511, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:31:03 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:03.027 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[bab3281e-dded-4985-b4cc-38a0d7ff4357]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388627, 'tstamp': 388627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240516, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388631, 'tstamp': 388631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240516, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
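
The RTM_NEWADDR replies above were read inside the ovnmeta- network namespace, where the metadata agent parks 169.254.169.254 on the tap device. A sketch of checking the same thing with pyroute2 (assumes pyroute2 is available and the caller can enter the namespace):

from pyroute2 import NetNS

with NetNS("ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b") as ns:
    for addr in ns.get_addr(label="tap2a4b8529-61"):
        attrs = dict(addr["attrs"])
        # expect 192.168.0.2/24 and the metadata address 169.254.169.254/32
        print(attrs.get("IFA_ADDRESS"), addr["prefixlen"])
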
Dec  1 19:31:03 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:03.028 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a4b8529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.031 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:03 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:03.033 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a4b8529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.033 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:31:03 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:03.034 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:31:03 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:03.034 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a4b8529-60, col_values=(('external_ids', {'iface-id': 'f95692ff-1cac-46fe-9e62-21af9fa55eb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:31:03 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:03.034 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.139 189568 DEBUG nova.compute.manager [req-d25f69c6-c55a-4214-b5e2-9b2a1ad79fb6 req-11701757-e389-4180-97b0-28df0c0eaec6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Received event network-vif-plugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.139 189568 DEBUG oslo_concurrency.lockutils [req-d25f69c6-c55a-4214-b5e2-9b2a1ad79fb6 req-11701757-e389-4180-97b0-28df0c0eaec6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.140 189568 DEBUG oslo_concurrency.lockutils [req-d25f69c6-c55a-4214-b5e2-9b2a1ad79fb6 req-11701757-e389-4180-97b0-28df0c0eaec6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.141 189568 DEBUG oslo_concurrency.lockutils [req-d25f69c6-c55a-4214-b5e2-9b2a1ad79fb6 req-11701757-e389-4180-97b0-28df0c0eaec6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.141 189568 DEBUG nova.compute.manager [req-d25f69c6-c55a-4214-b5e2-9b2a1ad79fb6 req-11701757-e389-4180-97b0-28df0c0eaec6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Processing event network-vif-plugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.327 189568 DEBUG nova.network.neutron [req-120b7ae6-8c23-40f2-98d2-03b9efaee9d5 req-b6940bb5-af4f-4427-8c0b-fd560c0cc04d 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updated VIF entry in instance network info cache for port 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.328 189568 DEBUG nova.network.neutron [req-120b7ae6-8c23-40f2-98d2-03b9efaee9d5 req-b6940bb5-af4f-4427-8c0b-fd560c0cc04d 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updating instance_info_cache with network_info: [{"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
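
The network_info cache entry is plain JSON. A short sketch of walking it for fixed and floating addresses (assumes the list above was saved to network_info.json):

import json

vifs = json.load(open("network_info.json"))
for vif in vifs:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed", ip["address"])                  # 192.168.0.66
            for fip in ip.get("floating_ips", []):
                print("floating", fip["address"])          # 192.168.122.187
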
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.346 189568 DEBUG oslo_concurrency.lockutils [req-120b7ae6-8c23-40f2-98d2-03b9efaee9d5 req-b6940bb5-af4f-4427-8c0b-fd560c0cc04d 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.635 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764617463.6352572, f4a023f0-04a7-470f-88ef-6284e0580f9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.636 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] VM Started (Lifecycle Event)
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.639 189568 DEBUG nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
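[editor's note] "completed in 0 seconds" means the network-vif-plugged notification had already arrived when the build path started waiting. A hypothetical sketch of that handshake (threading.Event stand-in, not Nova's implementation):

import threading

plug_event = threading.Event()

def on_external_event(name):            # runs on the notification path
    if name.startswith('network-vif-plugged'):
        plug_event.set()

def wait_for_vif_plug(timeout=300):     # runs on the build path
    # Returns immediately if the event was set before the wait began,
    # which is exactly the zero-second wait logged above.
    if not plug_event.wait(timeout):
        raise TimeoutError('network-vif-plugged never arrived')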
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.645 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.651 189568 INFO nova.virt.libvirt.driver [-] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Instance spawned successfully.
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.651 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.657 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.663 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.673 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.674 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.674 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.675 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.675 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.675 189568 DEBUG nova.virt.libvirt.driver [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
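[editor's note] The six "Found default" lines record which bus/model the driver chose for each hw_* image property the image left unset. A small stand-in using the values from the log itself:

DRIVER_DEFAULTS = {
    'hw_cdrom_bus': 'sata',
    'hw_disk_bus': 'virtio',
    'hw_input_bus': 'usb',
    'hw_pointer_model': 'usbtablet',
    'hw_video_model': 'virtio',
    'hw_vif_model': 'virtio',
}

def register_undefined_details(image_props):
    # Record a default only for properties the image did not define.
    return {k: v for k, v in DRIVER_DEFAULTS.items() if k not in image_props}

print(register_undefined_details({'hw_disk_bus': 'scsi'}))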
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.681 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.681 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764617463.6353655, f4a023f0-04a7-470f-88ef-6284e0580f9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.681 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] VM Paused (Lifecycle Event)
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.717 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.723 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764617463.6438248, f4a023f0-04a7-470f-88ef-6284e0580f9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.724 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] VM Resumed (Lifecycle Event)
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.747 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.753 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.759 189568 INFO nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Took 4.67 seconds to spawn the instance on the hypervisor.
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.760 189568 DEBUG nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.771 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] During sync_power_state the instance has a pending task (spawning). Skip.
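[editor's note] Both "Skip" lines follow the same rule: DB power_state 0 (NOSTATE) disagrees with VM power_state 1 (RUNNING), but a pending task_state (spawning) means the sync backs off instead of fighting the in-flight build. A sketch of that decision, assuming Nova's numeric power-state encoding:

NOSTATE, RUNNING = 0, 1

def sync_decision(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        return 'skip: pending task (%s)' % task_state
    if db_power_state != vm_power_state:
        return 'sync: update DB %s -> %s' % (db_power_state, vm_power_state)
    return 'in sync'

print(sync_decision(NOSTATE, RUNNING, 'spawning'))  # skip, as logged above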
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.831 189568 INFO nova.compute.manager [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Took 5.17 seconds to build instance.
Dec  1 19:31:03 compute-0 nova_compute[189564]: 2025-12-01 19:31:03.851 189568 DEBUG oslo_concurrency.lockutils [None req-25ca770d-7f98-45d0-a424-9d6d0202385c 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:31:05 compute-0 nova_compute[189564]: 2025-12-01 19:31:05.233 189568 DEBUG nova.compute.manager [req-62a1669f-0ec3-4ef0-8f40-c10d7e1654af req-5f632f3f-a52a-4e04-8411-ec2c3a2b1bb1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Received event network-vif-plugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 19:31:05 compute-0 nova_compute[189564]: 2025-12-01 19:31:05.234 189568 DEBUG oslo_concurrency.lockutils [req-62a1669f-0ec3-4ef0-8f40-c10d7e1654af req-5f632f3f-a52a-4e04-8411-ec2c3a2b1bb1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:31:05 compute-0 nova_compute[189564]: 2025-12-01 19:31:05.236 189568 DEBUG oslo_concurrency.lockutils [req-62a1669f-0ec3-4ef0-8f40-c10d7e1654af req-5f632f3f-a52a-4e04-8411-ec2c3a2b1bb1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:31:05 compute-0 nova_compute[189564]: 2025-12-01 19:31:05.237 189568 DEBUG oslo_concurrency.lockutils [req-62a1669f-0ec3-4ef0-8f40-c10d7e1654af req-5f632f3f-a52a-4e04-8411-ec2c3a2b1bb1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:31:05 compute-0 nova_compute[189564]: 2025-12-01 19:31:05.238 189568 DEBUG nova.compute.manager [req-62a1669f-0ec3-4ef0-8f40-c10d7e1654af req-5f632f3f-a52a-4e04-8411-ec2c3a2b1bb1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] No waiting events found dispatching network-vif-plugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 19:31:05 compute-0 nova_compute[189564]: 2025-12-01 19:31:05.238 189568 WARNING nova.compute.manager [req-62a1669f-0ec3-4ef0-8f40-c10d7e1654af req-5f632f3f-a52a-4e04-8411-ec2c3a2b1bb1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Received unexpected event network-vif-plugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 for instance with vm_state active and task_state None.
Dec  1 19:31:06 compute-0 podman[240527]: 2025-12-01 19:31:06.386690955 +0000 UTC m=+0.141411294 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec  1 19:31:07 compute-0 nova_compute[189564]: 2025-12-01 19:31:07.145 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:07 compute-0 nova_compute[189564]: 2025-12-01 19:31:07.695 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:12 compute-0 nova_compute[189564]: 2025-12-01 19:31:12.148 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
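[editor's note] The recurring "[POLLIN] on fd 27" lines are the OVS IDL's poll loop waking up on its OVSDB connection roughly every five seconds. A minimal loop built from the same ovs.poller primitives (requires the "ovs" Python package; the endpoint is a placeholder):

import socket
import ovs.poller

sock = socket.create_connection(('127.0.0.1', 6640))  # hypothetical OVSDB endpoint
poller = ovs.poller.Poller()
poller.fd_wait(sock.fileno(), ovs.poller.POLLIN)  # wake when the fd is readable
poller.timer_wait(5000)                           # ms; matches the ~5 s cadence above
poller.block()                                    # returns on POLLIN or timeout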
Dec  1 19:31:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:12.174 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:31:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:12.175 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:31:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:31:12.176 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:31:12 compute-0 podman[240545]: 2025-12-01 19:31:12.322642651 +0000 UTC m=+0.092442268 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:31:12 compute-0 nova_compute[189564]: 2025-12-01 19:31:12.699 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:17 compute-0 nova_compute[189564]: 2025-12-01 19:31:17.153 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:17 compute-0 podman[240575]: 2025-12-01 19:31:17.325932901 +0000 UTC m=+0.093325015 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:31:17 compute-0 nova_compute[189564]: 2025-12-01 19:31:17.702 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:22 compute-0 nova_compute[189564]: 2025-12-01 19:31:22.155 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:22 compute-0 podman[240595]: 2025-12-01 19:31:22.334456991 +0000 UTC m=+0.097605056 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:31:22 compute-0 nova_compute[189564]: 2025-12-01 19:31:22.705 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:25 compute-0 podman[240619]: 2025-12-01 19:31:25.334853921 +0000 UTC m=+0.091367534 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 19:31:25 compute-0 podman[240618]: 2025-12-01 19:31:25.347266763 +0000 UTC m=+0.112843415 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, container_name=kepler, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container)
Dec  1 19:31:26 compute-0 nova_compute[189564]: 2025-12-01 19:31:26.175 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:26 compute-0 nova_compute[189564]: 2025-12-01 19:31:26.218 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid e73931e9-f7fa-4666-b781-700b385532a9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec  1 19:31:26 compute-0 nova_compute[189564]: 2025-12-01 19:31:26.219 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid f4a023f0-04a7-470f-88ef-6284e0580f9e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec  1 19:31:26 compute-0 nova_compute[189564]: 2025-12-01 19:31:26.220 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:31:26 compute-0 nova_compute[189564]: 2025-12-01 19:31:26.220 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "e73931e9-f7fa-4666-b781-700b385532a9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:31:26 compute-0 nova_compute[189564]: 2025-12-01 19:31:26.221 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "f4a023f0-04a7-470f-88ef-6284e0580f9e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:31:26 compute-0 nova_compute[189564]: 2025-12-01 19:31:26.222 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:31:26 compute-0 nova_compute[189564]: 2025-12-01 19:31:26.280 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "e73931e9-f7fa-4666-b781-700b385532a9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:31:26 compute-0 nova_compute[189564]: 2025-12-01 19:31:26.285 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
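[editor's note] The periodic _sync_power_states pass fans out one sync per instance UUID, each under a lock named after that UUID so it cannot overlap a concurrent build or delete of the same instance. A sketch using oslo_concurrency's named locks, since those are what the lines above come from (function names illustrative):

from oslo_concurrency import lockutils

def sync_power_states(uuids, query_driver_power_state_and_sync):
    for uuid in uuids:
        # One named lock per instance, as in the acquire/release pairs above.
        with lockutils.lock(uuid):
            query_driver_power_state_and_sync(uuid)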
Dec  1 19:31:27 compute-0 nova_compute[189564]: 2025-12-01 19:31:27.158 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:27 compute-0 nova_compute[189564]: 2025-12-01 19:31:27.708 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:29 compute-0 podman[240655]: 2025-12-01 19:31:29.347110546 +0000 UTC m=+0.111105991 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  1 19:31:29 compute-0 podman[240656]: 2025-12-01 19:31:29.365255954 +0000 UTC m=+0.125579887 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 19:31:29 compute-0 podman[240654]: 2025-12-01 19:31:29.376752298 +0000 UTC m=+0.142613552 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 19:31:29 compute-0 podman[203750]: time="2025-12-01T19:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:31:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:31:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
Dec  1 19:31:31 compute-0 openstack_network_exporter[205914]: ERROR   19:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:31:31 compute-0 openstack_network_exporter[205914]: ERROR   19:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:31:31 compute-0 openstack_network_exporter[205914]: ERROR   19:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:31:31 compute-0 openstack_network_exporter[205914]: ERROR   19:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:31:31 compute-0 openstack_network_exporter[205914]: ERROR   19:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
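[editor's note] These appctl failures mean the exporter found no *.ctl control sockets where it looked; ovn-northd normally runs on the control plane rather than on a compute node, so the errors are likely benign here. A pre-flight check along the same lines, using the rundirs the exporter container mounts (socket naming follows the usual <daemon>.<pid>.ctl convention):

import glob

for rundir, daemon in [('/run/openvswitch', 'ovsdb-server'),
                       ('/run/ovn', 'ovn-northd')]:
    ctls = glob.glob('%s/%s*.ctl' % (rundir, daemon))
    print(daemon, 'control socket:', ctls or 'not found')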
Dec  1 19:31:32 compute-0 nova_compute[189564]: 2025-12-01 19:31:32.161 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:32 compute-0 nova_compute[189564]: 2025-12-01 19:31:32.710 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:32 compute-0 ovn_controller[97948]: 2025-12-01T19:31:32Z|00039|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec  1 19:31:36 compute-0 ovn_controller[97948]: 2025-12-01T19:31:36Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0a:1c:a4 192.168.0.66
Dec  1 19:31:36 compute-0 ovn_controller[97948]: 2025-12-01T19:31:36Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:1c:a4 192.168.0.66
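[editor's note] OVN's pinctrl thread answered DHCP natively for the new port; the MAC and address match the VIF in the network info cache above. A small parser for these pinctrl lines:

import re

line = ('2025-12-01T19:31:36Z|00007|pinctrl(ovn_pinctrl0)|INFO|'
        'DHCPACK fa:16:3e:0a:1c:a4 192.168.0.66')
m = re.search(r'(DHCPOFFER|DHCPACK)\s+([0-9a-f:]{17})\s+(\d{1,3}(?:\.\d{1,3}){3})', line)
if m:
    msg_type, mac, ip = m.groups()
    print(msg_type, mac, ip)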
Dec  1 19:31:37 compute-0 nova_compute[189564]: 2025-12-01 19:31:37.164 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:37 compute-0 podman[240726]: 2025-12-01 19:31:37.356969735 +0000 UTC m=+0.121641086 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec  1 19:31:37 compute-0 nova_compute[189564]: 2025-12-01 19:31:37.712 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:42 compute-0 nova_compute[189564]: 2025-12-01 19:31:42.169 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:42 compute-0 nova_compute[189564]: 2025-12-01 19:31:42.715 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:43 compute-0 podman[240750]: 2025-12-01 19:31:43.356362532 +0000 UTC m=+0.114160786 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:31:46 compute-0 nova_compute[189564]: 2025-12-01 19:31:46.295 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:46 compute-0 nova_compute[189564]: 2025-12-01 19:31:46.297 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:31:46 compute-0 nova_compute[189564]: 2025-12-01 19:31:46.297 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 19:31:47 compute-0 nova_compute[189564]: 2025-12-01 19:31:47.172 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:47 compute-0 nova_compute[189564]: 2025-12-01 19:31:47.306 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:31:47 compute-0 nova_compute[189564]: 2025-12-01 19:31:47.307 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:31:47 compute-0 nova_compute[189564]: 2025-12-01 19:31:47.308 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:31:47 compute-0 nova_compute[189564]: 2025-12-01 19:31:47.309 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:31:47 compute-0 nova_compute[189564]: 2025-12-01 19:31:47.719 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:48 compute-0 podman[240773]: 2025-12-01 19:31:48.320671695 +0000 UTC m=+0.093401007 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.810 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.811 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.811 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.812 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d9a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:31:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:48.817 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e73931e9-f7fa-4666-b781-700b385532a9 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 19:31:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:49.179 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e73931e9-f7fa-4666-b781-700b385532a9 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 19:31:49 compute-0 nova_compute[189564]: 2025-12-01 19:31:49.561 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:31:49 compute-0 nova_compute[189564]: 2025-12-01 19:31:49.584 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:31:49 compute-0 nova_compute[189564]: 2025-12-01 19:31:49.585 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 19:31:49 compute-0 nova_compute[189564]: 2025-12-01 19:31:49.587 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:49.799 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Mon, 01 Dec 2025 19:31:49 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-4db5901c-9cca-4c5d-892a-3665e5b241bd x-openstack-request-id: req-4db5901c-9cca-4c5d-892a-3665e5b241bd _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 19:31:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:49.800 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e73931e9-f7fa-4666-b781-700b385532a9", "name": "test_0", "status": "ACTIVE", "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "user_id": "7c24e8f82e7842b785e565ac65c7f494", "metadata": {}, "hostId": "e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6", "image": {"id": "15bc897a-453b-4133-b6db-08ecdc2b6db0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/15bc897a-453b-4133-b6db-08ecdc2b6db0"}]}, "flavor": {"id": "0891a7f6-7194-4f33-bc11-6f6ab8b16145", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/0891a7f6-7194-4f33-bc11-6f6ab8b16145"}]}, "created": "2025-12-01T19:29:43Z", "updated": "2025-12-01T19:29:55Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.47", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fc:8b:70"}, {"version": 4, "addr": "192.168.122.206", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fc:8b:70"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e73931e9-f7fa-4666-b781-700b385532a9"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e73931e9-f7fa-4666-b781-700b385532a9"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T19:29:55.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 19:31:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:49.800 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e73931e9-f7fa-4666-b781-700b385532a9 used request id req-4db5901c-9cca-4c5d-892a-3665e5b241bd request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 19:31:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:49.802 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:31:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:49.806 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f4a023f0-04a7-470f-88ef-6284e0580f9e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 19:31:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:49.808 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f4a023f0-04a7-470f-88ef-6284e0580f9e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 19:31:50 compute-0 nova_compute[189564]: 2025-12-01 19:31:50.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.535 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 01 Dec 2025 19:31:49 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d91ca7fa-4505-47a9-8c5c-d501a2b26f4a x-openstack-request-id: req-d91ca7fa-4505-47a9-8c5c-d501a2b26f4a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.535 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f4a023f0-04a7-470f-88ef-6284e0580f9e", "name": "vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd", "status": "ACTIVE", "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "user_id": "7c24e8f82e7842b785e565ac65c7f494", "metadata": {"metering.server_group": "47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9"}, "hostId": "e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6", "image": {"id": "15bc897a-453b-4133-b6db-08ecdc2b6db0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/15bc897a-453b-4133-b6db-08ecdc2b6db0"}]}, "flavor": {"id": "0891a7f6-7194-4f33-bc11-6f6ab8b16145", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/0891a7f6-7194-4f33-bc11-6f6ab8b16145"}]}, "created": "2025-12-01T19:30:56Z", "updated": "2025-12-01T19:31:03Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.66", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0a:1c:a4"}, {"version": 4, "addr": "192.168.122.187", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0a:1c:a4"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f4a023f0-04a7-470f-88ef-6284e0580f9e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f4a023f0-04a7-470f-88ef-6284e0580f9e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T19:31:03.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.535 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f4a023f0-04a7-470f-88ef-6284e0580f9e used request id req-d91ca7fa-4505-47a9-8c5c-d501a2b26f4a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.538 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f4a023f0-04a7-470f-88ef-6284e0580f9e', 'name': 'vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.538 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.539 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.539 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.540 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:31:50.539479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.546 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for e73931e9-f7fa-4666-b781-700b385532a9 / tap3cef930c-87 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.547 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.554 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f4a023f0-04a7-470f-88ef-6284e0580f9e / tap0aee22ef-1f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.555 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.557 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.557 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.558 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.558 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.558 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.558 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.559 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.559 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:31:50.558866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.560 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.560 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.560 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.560 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.561 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.561 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.561 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:31:50.561291) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.562 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.562 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.563 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.563 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.563 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.563 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.563 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:31:50.563701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.564 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.564 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.565 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.565 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.566 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.566 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.566 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.566 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.566 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.567 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.567 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.568 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.568 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.568 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.568 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.568 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:31:50.566407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:31:50.568916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.615 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.615 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.616 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.661 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.661 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.662 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.662 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.662 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.662 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.663 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.663 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.663 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.663 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:31:50.663221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.786 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.787 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.787 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.871 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.871 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.872 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.873 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.874 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.874 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.874 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.874 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.874 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.875 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.875 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.876 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.877 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:31:50.874921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.877 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.878 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.878 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.878 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T19:31:50.878358) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.879 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.879 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd>]
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.880 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.880 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.880 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.881 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.881 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.881 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.882 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.882 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.883 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 569150794 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.884 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 100146044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.884 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 76562748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.886 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.886 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:31:50.881238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.887 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.887 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.887 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.887 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.888 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:31:50.887655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.888 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.889 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.889 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.890 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.890 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.891 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.891 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.892 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.892 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.892 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.892 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.893 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.893 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:31:50.892945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.894 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.894 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.895 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.895 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.896 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.897 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.897 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.897 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.898 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.898 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.898 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.898 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:31:50.898395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.899 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.900 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.901 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.902 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.902 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.903 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.903 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.903 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.904 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.904 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.904 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:31:50.904386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.941 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.969 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.970 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.970 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.970 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.970 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.970 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.970 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.970 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.971 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.971 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.971 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 1126738410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.972 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 13740853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.972 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.972 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.973 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.973 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.973 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.973 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:31:50.970643) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.973 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.973 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.974 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.974 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.975 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.975 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.975 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.976 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.976 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.976 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.976 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.976 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.976 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.977 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.977 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.977 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.977 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.978 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.978 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.978 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.979 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.979 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.979 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.979 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.979 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:31:50.973813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.980 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.980 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.980 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.980 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.980 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.980 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.980 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.981 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.981 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.981 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.981 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.982 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.982 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.982 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.982 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.982 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:31:50.976893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.983 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.983 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.983 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.983 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.983 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:31:50.979523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.984 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.984 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.984 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.984 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.984 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.984 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:31:50.980778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.985 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.985 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.985 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.985 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.985 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.986 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.986 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.986 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.986 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.986 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.bytes volume: 1751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.987 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.987 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.987 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.987 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.987 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.987 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.987 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.988 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd>]
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.988 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.988 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.988 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.988 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.989 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 33780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.989 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/cpu volume: 32570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.989 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.989 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.989 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.989 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.989 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.990 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:31:50.982104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.990 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.990 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/memory.usage volume: 49.71875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.991 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:31:50.983178) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:31:50.984922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:31:50.986213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T19:31:50.987651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:31:50.988948) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:31:50.990185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.992 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.993 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.994 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:31:50.994 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:31:51 compute-0 nova_compute[189564]: 2025-12-01 19:31:51.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:52 compute-0 nova_compute[189564]: 2025-12-01 19:31:52.177 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:52 compute-0 nova_compute[189564]: 2025-12-01 19:31:52.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:52 compute-0 nova_compute[189564]: 2025-12-01 19:31:52.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:52 compute-0 nova_compute[189564]: 2025-12-01 19:31:52.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:31:52 compute-0 nova_compute[189564]: 2025-12-01 19:31:52.721 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.279 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.280 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.282 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:31:53 compute-0 podman[240797]: 2025-12-01 19:31:53.362077915 +0000 UTC m=+0.121766741 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.410 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.505 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.506 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.603 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.604 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.684 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.686 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.742 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.754 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.849 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.850 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.915 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:31:53 compute-0 nova_compute[189564]: 2025-12-01 19:31:53.917 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.015 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.017 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.101 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.532 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.533 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=72.36140823364258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.534 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.534 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.634 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.634 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance f4a023f0-04a7-470f-88ef-6284e0580f9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.635 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.635 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.718 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.734 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.756 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:31:54 compute-0 nova_compute[189564]: 2025-12-01 19:31:54.756 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:31:56 compute-0 podman[240846]: 2025-12-01 19:31:56.354903144 +0000 UTC m=+0.111164823 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:31:56 compute-0 podman[240845]: 2025-12-01 19:31:56.36582258 +0000 UTC m=+0.125637499 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, name=ubi9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc.)
Dec  1 19:31:56 compute-0 nova_compute[189564]: 2025-12-01 19:31:56.758 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:56 compute-0 nova_compute[189564]: 2025-12-01 19:31:56.759 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:31:57 compute-0 nova_compute[189564]: 2025-12-01 19:31:57.180 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:57 compute-0 nova_compute[189564]: 2025-12-01 19:31:57.726 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:31:59 compute-0 podman[203750]: time="2025-12-01T19:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:31:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:31:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4777 "" "Go-http-client/1.1"
Dec  1 19:32:00 compute-0 podman[240884]: 2025-12-01 19:32:00.347056758 +0000 UTC m=+0.095234720 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  1 19:32:00 compute-0 podman[240883]: 2025-12-01 19:32:00.380511114 +0000 UTC m=+0.146479568 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true)
Dec  1 19:32:00 compute-0 podman[240885]: 2025-12-01 19:32:00.390933316 +0000 UTC m=+0.135784155 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:32:01 compute-0 openstack_network_exporter[205914]: ERROR   19:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:32:01 compute-0 openstack_network_exporter[205914]: ERROR   19:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:32:01 compute-0 openstack_network_exporter[205914]: ERROR   19:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:32:01 compute-0 openstack_network_exporter[205914]: ERROR   19:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:32:01 compute-0 openstack_network_exporter[205914]: ERROR   19:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:32:02 compute-0 nova_compute[189564]: 2025-12-01 19:32:02.183 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:02 compute-0 nova_compute[189564]: 2025-12-01 19:32:02.729 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:07 compute-0 nova_compute[189564]: 2025-12-01 19:32:07.186 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:07 compute-0 nova_compute[189564]: 2025-12-01 19:32:07.732 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:08 compute-0 podman[240947]: 2025-12-01 19:32:08.376330882 +0000 UTC m=+0.134174476 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7)
Dec  1 19:32:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:32:12.176 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:32:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:32:12.177 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:32:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:32:12.177 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:32:12 compute-0 nova_compute[189564]: 2025-12-01 19:32:12.189 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:12 compute-0 nova_compute[189564]: 2025-12-01 19:32:12.736 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:14 compute-0 podman[240967]: 2025-12-01 19:32:14.36380214 +0000 UTC m=+0.117769828 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 19:32:17 compute-0 nova_compute[189564]: 2025-12-01 19:32:17.194 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:17 compute-0 nova_compute[189564]: 2025-12-01 19:32:17.738 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:19 compute-0 podman[240989]: 2025-12-01 19:32:19.294795521 +0000 UTC m=+0.065796449 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:32:22 compute-0 nova_compute[189564]: 2025-12-01 19:32:22.197 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:22 compute-0 nova_compute[189564]: 2025-12-01 19:32:22.741 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:24 compute-0 podman[241009]: 2025-12-01 19:32:24.324620848 +0000 UTC m=+0.085225050 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:32:27 compute-0 nova_compute[189564]: 2025-12-01 19:32:27.200 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:27 compute-0 podman[241032]: 2025-12-01 19:32:27.34979983 +0000 UTC m=+0.106817169 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Dec  1 19:32:27 compute-0 podman[241033]: 2025-12-01 19:32:27.352575256 +0000 UTC m=+0.108391248 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  1 19:32:27 compute-0 nova_compute[189564]: 2025-12-01 19:32:27.743 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:29 compute-0 podman[203750]: time="2025-12-01T19:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:32:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:32:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
Dec  1 19:32:31 compute-0 podman[241069]: 2025-12-01 19:32:31.356519559 +0000 UTC m=+0.102716992 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 19:32:31 compute-0 podman[241068]: 2025-12-01 19:32:31.373765933 +0000 UTC m=+0.123277639 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  1 19:32:31 compute-0 openstack_network_exporter[205914]: ERROR   19:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:32:31 compute-0 openstack_network_exporter[205914]: ERROR   19:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:32:31 compute-0 openstack_network_exporter[205914]: ERROR   19:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:32:31 compute-0 openstack_network_exporter[205914]: ERROR   19:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:32:31 compute-0 openstack_network_exporter[205914]: ERROR   19:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:32:31 compute-0 podman[241070]: 2025-12-01 19:32:31.445295548 +0000 UTC m=+0.185068832 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:32:32 compute-0 nova_compute[189564]: 2025-12-01 19:32:32.203 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:32 compute-0 nova_compute[189564]: 2025-12-01 19:32:32.747 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:37 compute-0 nova_compute[189564]: 2025-12-01 19:32:37.206 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:37 compute-0 nova_compute[189564]: 2025-12-01 19:32:37.751 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:39 compute-0 podman[241127]: 2025-12-01 19:32:39.373752138 +0000 UTC m=+0.139613795 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, architecture=x86_64, config_id=edpm, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 19:32:42 compute-0 nova_compute[189564]: 2025-12-01 19:32:42.208 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:42 compute-0 nova_compute[189564]: 2025-12-01 19:32:42.754 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:44 compute-0 podman[241150]: 2025-12-01 19:32:44.791379789 +0000 UTC m=+0.096268281 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:32:46 compute-0 nova_compute[189564]: 2025-12-01 19:32:46.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:46 compute-0 nova_compute[189564]: 2025-12-01 19:32:46.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:32:46 compute-0 nova_compute[189564]: 2025-12-01 19:32:46.554 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:32:46 compute-0 nova_compute[189564]: 2025-12-01 19:32:46.554 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:32:46 compute-0 nova_compute[189564]: 2025-12-01 19:32:46.554 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:32:47 compute-0 nova_compute[189564]: 2025-12-01 19:32:47.213 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:47 compute-0 nova_compute[189564]: 2025-12-01 19:32:47.758 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:49 compute-0 nova_compute[189564]: 2025-12-01 19:32:49.375 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updating instance_info_cache with network_info: [{"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:32:49 compute-0 nova_compute[189564]: 2025-12-01 19:32:49.422 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:32:49 compute-0 nova_compute[189564]: 2025-12-01 19:32:49.422 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 19:32:50 compute-0 nova_compute[189564]: 2025-12-01 19:32:50.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:50 compute-0 nova_compute[189564]: 2025-12-01 19:32:50.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:50 compute-0 podman[241176]: 2025-12-01 19:32:50.351838822 +0000 UTC m=+0.122446512 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 19:32:51 compute-0 nova_compute[189564]: 2025-12-01 19:32:51.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:52 compute-0 nova_compute[189564]: 2025-12-01 19:32:52.217 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:52 compute-0 nova_compute[189564]: 2025-12-01 19:32:52.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:52 compute-0 nova_compute[189564]: 2025-12-01 19:32:52.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:32:52 compute-0 nova_compute[189564]: 2025-12-01 19:32:52.761 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:53 compute-0 nova_compute[189564]: 2025-12-01 19:32:53.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:53 compute-0 nova_compute[189564]: 2025-12-01 19:32:53.268 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.300 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.301 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.301 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.302 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.517 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.614 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.615 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.678 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.680 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.773 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.776 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.838 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.850 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.913 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.915 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.977 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:32:54 compute-0 nova_compute[189564]: 2025-12-01 19:32:54.979 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.074 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.077 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.140 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:32:55 compute-0 podman[241222]: 2025-12-01 19:32:55.361542037 +0000 UTC m=+0.118800901 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.625 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.628 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=72.36140441894531GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.629 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.629 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.736 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.737 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance f4a023f0-04a7-470f-88ef-6284e0580f9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.738 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.738 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.808 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.823 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.825 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:32:55 compute-0 nova_compute[189564]: 2025-12-01 19:32:55.826 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:32:56 compute-0 nova_compute[189564]: 2025-12-01 19:32:56.826 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:57 compute-0 nova_compute[189564]: 2025-12-01 19:32:57.222 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:57 compute-0 nova_compute[189564]: 2025-12-01 19:32:57.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:32:57 compute-0 nova_compute[189564]: 2025-12-01 19:32:57.764 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:32:58 compute-0 podman[241244]: 2025-12-01 19:32:58.359751953 +0000 UTC m=+0.108209471 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-type=git, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.tags=base rhel9, distribution-scope=public)
Dec  1 19:32:58 compute-0 podman[241245]: 2025-12-01 19:32:58.363736816 +0000 UTC m=+0.109273385 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true)
Dec  1 19:32:59 compute-0 podman[203750]: time="2025-12-01T19:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:32:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:32:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4780 "" "Go-http-client/1.1"
Dec  1 19:33:01 compute-0 openstack_network_exporter[205914]: ERROR   19:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:33:01 compute-0 openstack_network_exporter[205914]: ERROR   19:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:33:01 compute-0 openstack_network_exporter[205914]: ERROR   19:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:33:01 compute-0 openstack_network_exporter[205914]: ERROR   19:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:33:01 compute-0 openstack_network_exporter[205914]: ERROR   19:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:33:02 compute-0 nova_compute[189564]: 2025-12-01 19:33:02.225 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:02 compute-0 podman[241284]: 2025-12-01 19:33:02.367782742 +0000 UTC m=+0.118906553 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:33:02 compute-0 podman[241283]: 2025-12-01 19:33:02.393360874 +0000 UTC m=+0.148564462 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 19:33:02 compute-0 podman[241285]: 2025-12-01 19:33:02.428626805 +0000 UTC m=+0.171812030 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:33:02 compute-0 nova_compute[189564]: 2025-12-01 19:33:02.774 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:03 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 19:33:07 compute-0 nova_compute[189564]: 2025-12-01 19:33:07.229 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:07 compute-0 nova_compute[189564]: 2025-12-01 19:33:07.776 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:10 compute-0 podman[241347]: 2025-12-01 19:33:10.373525721 +0000 UTC m=+0.134109955 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Dec  1 19:33:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:33:12.177 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:33:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:33:12.178 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:33:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:33:12.179 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:33:12 compute-0 nova_compute[189564]: 2025-12-01 19:33:12.232 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:12 compute-0 nova_compute[189564]: 2025-12-01 19:33:12.779 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:15 compute-0 podman[241370]: 2025-12-01 19:33:15.349214988 +0000 UTC m=+0.108931054 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:33:17 compute-0 nova_compute[189564]: 2025-12-01 19:33:17.236 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:17 compute-0 nova_compute[189564]: 2025-12-01 19:33:17.783 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:21 compute-0 podman[241396]: 2025-12-01 19:33:21.326006996 +0000 UTC m=+0.091845445 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  1 19:33:22 compute-0 nova_compute[189564]: 2025-12-01 19:33:22.238 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:22 compute-0 nova_compute[189564]: 2025-12-01 19:33:22.786 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:26 compute-0 podman[241415]: 2025-12-01 19:33:26.32352894 +0000 UTC m=+0.091042710 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:33:27 compute-0 nova_compute[189564]: 2025-12-01 19:33:27.241 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:27 compute-0 nova_compute[189564]: 2025-12-01 19:33:27.788 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:29 compute-0 podman[241438]: 2025-12-01 19:33:29.309215996 +0000 UTC m=+0.072275369 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:33:29 compute-0 podman[241437]: 2025-12-01 19:33:29.310839016 +0000 UTC m=+0.079310277 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9)
Dec  1 19:33:29 compute-0 podman[203750]: time="2025-12-01T19:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:33:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:33:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Dec  1 19:33:31 compute-0 openstack_network_exporter[205914]: ERROR   19:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:33:31 compute-0 openstack_network_exporter[205914]: ERROR   19:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:33:31 compute-0 openstack_network_exporter[205914]: ERROR   19:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:33:31 compute-0 openstack_network_exporter[205914]: ERROR   19:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:33:31 compute-0 openstack_network_exporter[205914]: ERROR   19:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
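These exporter failures are expected on a compute node: ovn-northd and the OVS database server run on the control plane, and the exporter only reaches a daemon through its <rundir>/<name>.<pid>.ctl control socket. A loose Python illustration of that lookup (the rundir paths are assumptions, not taken from the exporter's config):

    import glob
    import os

    RUNDIRS = {
        "ovn-northd": "/run/ovn",            # assumed rundir
        "ovsdb-server": "/run/openvswitch",  # assumed rundir
    }

    def find_control_socket(daemon: str) -> str:
        # ovs/ovn daemons create <rundir>/<name>.<pid>.ctl unix sockets;
        # if none exist, the daemon is not running on this host.
        matches = glob.glob(os.path.join(RUNDIRS[daemon], f"{daemon}.*.ctl"))
        if not matches:
            raise FileNotFoundError(f"no control socket files found for {daemon}")
        return matches[0]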
Dec  1 19:33:32 compute-0 nova_compute[189564]: 2025-12-01 19:33:32.244 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:32 compute-0 nova_compute[189564]: 2025-12-01 19:33:32.790 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:33 compute-0 podman[241474]: 2025-12-01 19:33:33.347796915 +0000 UTC m=+0.117086757 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 19:33:33 compute-0 podman[241475]: 2025-12-01 19:33:33.358536437 +0000 UTC m=+0.113395712 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  1 19:33:33 compute-0 podman[241476]: 2025-12-01 19:33:33.405516623 +0000 UTC m=+0.156262371 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:33:37 compute-0 nova_compute[189564]: 2025-12-01 19:33:37.248 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:37 compute-0 nova_compute[189564]: 2025-12-01 19:33:37.793 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:41 compute-0 podman[241534]: 2025-12-01 19:33:41.356942797 +0000 UTC m=+0.113962300 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 19:33:42 compute-0 nova_compute[189564]: 2025-12-01 19:33:42.251 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:42 compute-0 nova_compute[189564]: 2025-12-01 19:33:42.797 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:46 compute-0 podman[241554]: 2025-12-01 19:33:46.320375276 +0000 UTC m=+0.086601493 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
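The node_exporter container publishes host port 9100 (per config_data above) with the systemd collector enabled and most others disabled. A minimal sketch of scraping it, assuming plain HTTP is served; the configured web.config.file may enforce TLS/auth in a real deployment:

    import urllib.request

    # Port 9100 comes from the 'ports' entry in config_data above.
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            # node_systemd_unit_state is exposed by --collector.systemd
            if line.startswith("node_systemd_unit_state"):
                print(line)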
Dec  1 19:33:47 compute-0 nova_compute[189564]: 2025-12-01 19:33:47.255 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:47 compute-0 nova_compute[189564]: 2025-12-01 19:33:47.798 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:48 compute-0 nova_compute[189564]: 2025-12-01 19:33:48.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:33:48 compute-0 nova_compute[189564]: 2025-12-01 19:33:48.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:33:48 compute-0 nova_compute[189564]: 2025-12-01 19:33:48.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
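_heal_instance_info_cache is one of nova-compute's oslo.service periodic tasks, driven by the run_periodic_tasks loop seen above. A minimal sketch of how such a task is declared with the real oslo_service API; the spacing value is illustrative, not nova's:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # illustrative interval
        def _heal_instance_info_cache(self, context):
            # nova's real task refreshes one instance's network info cache
            print("rebuilding instance info cache")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # invoked by the service loop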
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.811 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.811 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
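The warning fires because a single worker thread ([1] threads, as logged) serves the whole pollster list, so the pollsters registered below execute serially. A sketch of the effect with concurrent.futures; this is an illustration, not ceilometer's actual code:

    from concurrent.futures import ThreadPoolExecutor

    def poll(meter: str) -> str:
        return f"polled {meter}"

    meters = ["network.incoming.bytes.delta", "network.outgoing.packets",
              "disk.device.capacity"]

    # With max_workers=1 the submitted pollsters queue up and run one
    # after another, lengthening the overall polling cycle.
    with ThreadPoolExecutor(max_workers=1) as pool:
        for result in pool.map(poll, meters):
            print(result)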
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.812 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.823 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.829 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f4a023f0-04a7-470f-88ef-6284e0580f9e', 'name': 'vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
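discover_libvirt_polling builds these instance records from the Nova metadata embedded in each libvirt domain, under the namespace configured as LIBVIRT_METADATA_URI earlier in this log. A sketch of reading that metadata with the standard library; the sample XML here is fabricated for illustration:

    import xml.etree.ElementTree as ET

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"  # LIBVIRT_METADATA_URI

    # Toy domain metadata in the Nova namespace; real XML carries more fields.
    domain_xml = """<domain>
      <metadata>
        <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
          <nova:name>test_0</nova:name>
          <nova:flavor name="m1.small"/>
        </nova:instance>
      </metadata>
    </domain>"""

    inst = ET.fromstring(domain_xml).find(f".//{{{NOVA_NS}}}instance")
    print(inst.findtext(f"{{{NOVA_NS}}}name"))            # test_0
    print(inst.find(f"{{{NOVA_NS}}}flavor").get("name"))  # m1.small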
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.830 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.830 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.830 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.830 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.831 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:33:48.830517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.834 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.838 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.bytes.delta volume: 3363 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.839 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
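A *.delta meter reports the change in a cumulative counter since the previous polling cycle, which is why the idle instance above samples 0 while the busier one samples 3363. A minimal sketch of that bookkeeping; ceilometer's real cache implementation differs:

    history: dict[tuple[str, str], int] = {}

    def delta_sample(instance_id: str, meter: str, cumulative: int) -> int:
        key = (instance_id, meter)
        previous = history.get(key, cumulative)  # first cycle yields 0
        history[key] = cumulative
        return cumulative - previous

    print(delta_sample("e73931e9", "network.incoming.bytes", 1968))  # 0, first poll
    print(delta_sample("e73931e9", "network.incoming.bytes", 1968))  # 0, no traffic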
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.839 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.839 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.840 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.840 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.840 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.840 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.841 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.841 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.841 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.841 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.841 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.841 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.841 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.842 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.bytes.delta volume: 3071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.842 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.842 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:33:48.840098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.842 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.843 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.843 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.843 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.843 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.843 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.843 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.844 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:33:48.841781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.844 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.844 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.844 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.844 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.844 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.845 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.845 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.845 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:33:48.843136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.845 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.846 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.846 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.846 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.846 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.846 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:33:48.844676) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:33:48.846334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.869 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.869 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.870 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.898 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.899 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.899 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.900 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
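Each instance reports three block devices. The 1073741824-byte values are exactly 1 GiB, matching the flavor's disk=1 and ephemeral=1 shown in the discovery records above; the small third device (485376 / 583680 bytes) is presumably the config drive, though the log does not say so:

    GIB = 1024 ** 3
    print(1073741824 == 1 * GIB)  # True: flavor disk=1 and ephemeral=1 (GiB)
    print(485376 / 1024)          # 474.0 KiB, likely the config drive (assumption)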
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.900 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.900 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.900 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.901 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.901 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:33:48.901176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.990 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.991 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:48.991 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.091 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.092 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.092 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.092 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.092 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.093 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.093 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.093 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.093 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.093 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:33:49.093293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.093 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.094 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.094 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.094 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.094 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.094 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.094 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.094 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.094 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.095 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:33:49.094798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.095 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.095 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.095 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 571654353 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.096 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 100146044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.096 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 76562748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.096 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
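The disk.device.read.latency volumes above (one per block device of each instance) are, per ceilometer's meter definitions, cumulative counters in nanoseconds, so a consumer derives a usable figure by diffing consecutive polls. A minimal sketch under that assumption, with a hypothetical value for the second poll:

```python
# Diff two consecutive cumulative disk.device.read.latency samples
# (nanoseconds, as polled above) into a per-interval delta.
def counter_delta(prev, curr):
    """prev/curr: (unix_ts, cumulative_ns). Returns (seconds, ns_delta) or None."""
    dt, dv = curr[0] - prev[0], curr[1] - prev[1]
    if dt <= 0 or dv < 0:      # counter reset or clock skew: drop the sample
        return None
    return dt, dv

prev = (1764617629, 474440550)   # value from the e73931e9... sample above
curr = (1764617929, 475440550)   # hypothetical poll 300 s later
dt, dv = counter_delta(prev, curr)
print(f"{dv / 1e6:.1f} ms of read latency accrued over {dt} s")
```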
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.096 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.096 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.096 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.097 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.097 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.097 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.097 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.097 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:33:49.097065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.097 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.098 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.098 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.098 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.098 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
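Every cycle in this trace logs the same coordination check and answers it with hashrings [None]: no polling source on this agent requests coordination, so compute-0 keeps every instance it discovers locally. When coordination is enabled, agents instead split resources across a hash ring. The sketch below illustrates the partitioning idea generically and is not ceilometer/tooz code; a real ring also minimizes reassignment when membership changes, which plain modulo hashing does not:

```python
import hashlib

# Generic illustration of hash-based resource partitioning between
# polling agents; not ceilometer/tooz code.
def owner(resource_id, agents):
    """Deterministically assign a resource to one agent."""
    digest = hashlib.md5(resource_id.encode()).hexdigest()
    return agents[int(digest, 16) % len(agents)]

agents = ["compute-0", "compute-1"]
for rid in ("e73931e9-f7fa-4666-b781-700b385532a9",
            "f4a023f0-04a7-470f-88ef-6284e0580f9e"):
    print(rid, "->", owner(rid, agents))
```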
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.099 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.099 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.099 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.099 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:33:49.099208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.099 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.099 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.099 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.100 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.100 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.100 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.100 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.101 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.101 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.101 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.101 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.101 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:33:49.101214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.101 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.101 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.102 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.102 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.102 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.102 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.102 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.102 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.103 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.103 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.103 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.103 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:33:49.103140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.137 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.166 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.167 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
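Both instances report power.state volume 1, which in nova's power-state enumeration means the domain is running. A lookup sketch with the code table reproduced from memory of nova.compute.power_state (worth verifying against the nova source):

```python
# Power-state codes as in nova.compute.power_state (from memory; verify
# against the nova source before relying on them).
POWER_STATES = {
    0: "nostate",
    1: "running",
    3: "paused",
    4: "shutdown",
    6: "crashed",
    7: "suspended",
}

volume = 1   # as polled for both instances above
print(POWER_STATES.get(volume, "unknown"))   # -> running
```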
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.167 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.167 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.167 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.168 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.168 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:33:49.168137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.170 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.171 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.171 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.171 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 1158162729 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.172 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 13740853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.172 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.172 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.172 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.173 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.173 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.173 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.173 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.173 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:33:49.173270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.173 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.173 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.174 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.174 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.174 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.175 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.175 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.175 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.175 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.175 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.175 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:33:49.175520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.175 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.176 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.176 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.176 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.176 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.176 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.177 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.177 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.177 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.177 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.177 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.177 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.178 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:33:49.177725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.178 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.178 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.178 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.178 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.178 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.179 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:33:49.178927) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.179 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.179 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.179 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.180 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.180 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.180 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.180 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.180 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.180 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.181 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.181 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:33:49.180221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.181 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.181 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:33:49.181536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.181 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.182 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.182 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.182 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.182 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.182 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.182 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.182 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.183 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.183 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:33:49.182820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.183 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.183 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.183 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.183 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.184 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.184 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.184 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.184 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.bytes volume: 4822 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:33:49.184094) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 35550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.185 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/cpu volume: 71910000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.186 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
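The cpu samples above (35550000000 and 71910000000) are cumulative nanoseconds of CPU time consumed by each guest. Assuming those semantics, average utilization over a polling interval is the delta divided by wall time and vCPU count; a sketch with a hypothetical second sample and vCPU count:

```python
# Derive average CPU utilization (%) from two cumulative `cpu` samples
# (nanoseconds). The second sample and the vCPU count are hypothetical.
def cpu_util_pct(prev_ns, curr_ns, interval_s, vcpus):
    return (curr_ns - prev_ns) / (interval_s * vcpus * 1e9) * 100.0

prev_ns = 35_550_000_000    # e73931e9... sample above
curr_ns = 35_850_000_000    # hypothetical poll 300 s later
print(f"{cpu_util_pct(prev_ns, curr_ns, 300, 1):.1f}%")   # -> 0.1%
```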
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.186 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.186 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.186 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.187 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.187 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.187 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.187 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:33:49.185556) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.187 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/memory.usage volume: 49.1796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.188 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:33:49.187092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.188 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.189 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.190 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.191 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.192 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.192 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.192 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:33:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:33:49.192 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
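Every sample this cycle produced is recoverable from the _stats_to_sample debug lines, which all share the shape `<instance-uuid>/<meter> volume: <value>`. A small parser that aggregates such lines per meter and instance (the regex is tailored to the format above; the demo line is abbreviated):

```python
import re
from collections import defaultdict

# Collect "_stats_to_sample" debug lines into {meter: {instance: [values]}}.
SAMPLE_RE = re.compile(
    r"ceilometer\.compute\.pollsters \[-\] "
    r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<value>[\d.]+)"
)

def collect(lines):
    samples = defaultdict(lambda: defaultdict(list))
    for line in lines:
        m = SAMPLE_RE.search(line)
        if m:
            samples[m["meter"]][m["instance"]].append(float(m["value"]))
    return samples

demo = ["DEBUG ceilometer.compute.pollsters [-] "
        "e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 35550000000 _stats_to_sample"]
print({meter: dict(per_inst) for meter, per_inst in collect(demo).items()})
```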
Dec  1 19:33:49 compute-0 nova_compute[189564]: 2025-12-01 19:33:49.382 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:33:49 compute-0 nova_compute[189564]: 2025-12-01 19:33:49.383 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:33:49 compute-0 nova_compute[189564]: 2025-12-01 19:33:49.383 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:33:49 compute-0 nova_compute[189564]: 2025-12-01 19:33:49.383 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:33:52 compute-0 nova_compute[189564]: 2025-12-01 19:33:52.245 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:33:52 compute-0 nova_compute[189564]: 2025-12-01 19:33:52.259 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:52 compute-0 nova_compute[189564]: 2025-12-01 19:33:52.272 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:33:52 compute-0 nova_compute[189564]: 2025-12-01 19:33:52.272 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 19:33:52 compute-0 nova_compute[189564]: 2025-12-01 19:33:52.272 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:33:52 compute-0 nova_compute[189564]: 2025-12-01 19:33:52.273 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:33:52 compute-0 nova_compute[189564]: 2025-12-01 19:33:52.273 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:33:52 compute-0 nova_compute[189564]: 2025-12-01 19:33:52.273 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:33:52 compute-0 podman[241585]: 2025-12-01 19:33:52.328748932 +0000 UTC m=+0.093550447 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 19:33:52 compute-0 nova_compute[189564]: 2025-12-01 19:33:52.801 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:54 compute-0 nova_compute[189564]: 2025-12-01 19:33:54.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:33:54 compute-0 nova_compute[189564]: 2025-12-01 19:33:54.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.280 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.281 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.359 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.422 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.424 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.484 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.486 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.542 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.543 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.602 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.611 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.671 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.672 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.728 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.730 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.788 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.790 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:33:55 compute-0 nova_compute[189564]: 2025-12-01 19:33:55.873 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.196 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.197 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5053MB free_disk=72.36140441894531GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.198 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.199 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.320 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.321 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance f4a023f0-04a7-470f-88ef-6284e0580f9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.322 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.322 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.384 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.411 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.413 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:33:56 compute-0 nova_compute[189564]: 2025-12-01 19:33:56.414 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:33:57 compute-0 nova_compute[189564]: 2025-12-01 19:33:57.261 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:57 compute-0 podman[241627]: 2025-12-01 19:33:57.339805865 +0000 UTC m=+0.099159161 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:33:57 compute-0 nova_compute[189564]: 2025-12-01 19:33:57.804 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:33:59 compute-0 nova_compute[189564]: 2025-12-01 19:33:59.414 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:33:59 compute-0 podman[203750]: time="2025-12-01T19:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:33:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:33:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4778 "" "Go-http-client/1.1"
Dec  1 19:34:00 compute-0 podman[241648]: 2025-12-01 19:34:00.295733179 +0000 UTC m=+0.067665486 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public)
Dec  1 19:34:00 compute-0 podman[241649]: 2025-12-01 19:34:00.332883719 +0000 UTC m=+0.097757088 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125)
Dec  1 19:34:01 compute-0 openstack_network_exporter[205914]: ERROR   19:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:34:01 compute-0 openstack_network_exporter[205914]: ERROR   19:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:34:01 compute-0 openstack_network_exporter[205914]: ERROR   19:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:34:01 compute-0 openstack_network_exporter[205914]: ERROR   19:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:34:01 compute-0 openstack_network_exporter[205914]: ERROR   19:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:34:02 compute-0 nova_compute[189564]: 2025-12-01 19:34:02.265 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:02 compute-0 nova_compute[189564]: 2025-12-01 19:34:02.807 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:04 compute-0 podman[241690]: 2025-12-01 19:34:04.139772204 +0000 UTC m=+0.130618415 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:34:04 compute-0 podman[241689]: 2025-12-01 19:34:04.144035487 +0000 UTC m=+0.141854174 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 19:34:04 compute-0 podman[241691]: 2025-12-01 19:34:04.172304282 +0000 UTC m=+0.159889032 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller)
Dec  1 19:34:07 compute-0 nova_compute[189564]: 2025-12-01 19:34:07.271 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:07 compute-0 nova_compute[189564]: 2025-12-01 19:34:07.809 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:34:12.179 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:34:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:34:12.180 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:34:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:34:12.182 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:34:12 compute-0 nova_compute[189564]: 2025-12-01 19:34:12.275 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:12 compute-0 podman[241746]: 2025-12-01 19:34:12.347360735 +0000 UTC m=+0.101594443 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git)
Dec  1 19:34:12 compute-0 nova_compute[189564]: 2025-12-01 19:34:12.815 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:17 compute-0 nova_compute[189564]: 2025-12-01 19:34:17.278 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:17 compute-0 podman[241767]: 2025-12-01 19:34:17.322742911 +0000 UTC m=+0.098734573 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 19:34:17 compute-0 nova_compute[189564]: 2025-12-01 19:34:17.815 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:22 compute-0 nova_compute[189564]: 2025-12-01 19:34:22.280 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:22 compute-0 nova_compute[189564]: 2025-12-01 19:34:22.818 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:23 compute-0 podman[241794]: 2025-12-01 19:34:23.378443539 +0000 UTC m=+0.133703091 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:34:27 compute-0 nova_compute[189564]: 2025-12-01 19:34:27.284 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:27 compute-0 nova_compute[189564]: 2025-12-01 19:34:27.823 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:28 compute-0 podman[241812]: 2025-12-01 19:34:28.356113479 +0000 UTC m=+0.128216751 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:34:29 compute-0 podman[203750]: time="2025-12-01T19:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:34:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:34:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4789 "" "Go-http-client/1.1"
Dec  1 19:34:31 compute-0 podman[241836]: 2025-12-01 19:34:31.365152183 +0000 UTC m=+0.117435206 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.expose-services=, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc.)
Dec  1 19:34:31 compute-0 podman[241837]: 2025-12-01 19:34:31.397172239 +0000 UTC m=+0.142693651 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 19:34:31 compute-0 openstack_network_exporter[205914]: ERROR   19:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:34:31 compute-0 openstack_network_exporter[205914]: ERROR   19:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:34:31 compute-0 openstack_network_exporter[205914]: ERROR   19:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:34:31 compute-0 openstack_network_exporter[205914]: ERROR   19:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:34:31 compute-0 openstack_network_exporter[205914]: ERROR   19:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:34:32 compute-0 nova_compute[189564]: 2025-12-01 19:34:32.289 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:32 compute-0 nova_compute[189564]: 2025-12-01 19:34:32.826 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:34:34 compute-0 podman[241877]: 2025-12-01 19:34:34.339866478 +0000 UTC m=+0.099659092 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125)
Dec  1 19:34:34 compute-0 podman[241878]: 2025-12-01 19:34:34.373796164 +0000 UTC m=+0.129326136 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 19:34:34 compute-0 podman[241879]: 2025-12-01 19:34:34.424757501 +0000 UTC m=+0.167455614 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Dec  1 19:34:37 compute-0 nova_compute[189564]: 2025-12-01 19:34:37.293 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:37 compute-0 nova_compute[189564]: 2025-12-01 19:34:37.829 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:42 compute-0 nova_compute[189564]: 2025-12-01 19:34:42.297 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:42 compute-0 nova_compute[189564]: 2025-12-01 19:34:42.832 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:43 compute-0 podman[241935]: 2025-12-01 19:34:43.376690501 +0000 UTC m=+0.139929926 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 19:34:47 compute-0 nova_compute[189564]: 2025-12-01 19:34:47.300 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:47 compute-0 podman[241958]: 2025-12-01 19:34:47.602081169 +0000 UTC m=+0.111236734 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:34:47 compute-0 nova_compute[189564]: 2025-12-01 19:34:47.834 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:48 compute-0 nova_compute[189564]: 2025-12-01 19:34:48.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:34:48 compute-0 nova_compute[189564]: 2025-12-01 19:34:48.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:34:49 compute-0 nova_compute[189564]: 2025-12-01 19:34:49.422 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:34:49 compute-0 nova_compute[189564]: 2025-12-01 19:34:49.424 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:34:49 compute-0 nova_compute[189564]: 2025-12-01 19:34:49.425 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:34:52 compute-0 nova_compute[189564]: 2025-12-01 19:34:52.304 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:52 compute-0 nova_compute[189564]: 2025-12-01 19:34:52.837 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:53 compute-0 nova_compute[189564]: 2025-12-01 19:34:53.449 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updating instance_info_cache with network_info: [{"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:34:53 compute-0 nova_compute[189564]: 2025-12-01 19:34:53.473 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:34:53 compute-0 nova_compute[189564]: 2025-12-01 19:34:53.474 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
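
The heal task above serializes each per-instance cache refresh behind a named oslo.concurrency lock ("refresh_cache-<uuid>"); the Acquiring/Acquired/Releasing DEBUG lines are emitted by lockutils while the lock is held. A minimal sketch of the same pattern (function names are illustrative, not nova's):

    # refresh_sketch.py - the acquire/refresh/release pattern logged above
    from oslo_concurrency import lockutils

    def refresh_instance_cache(instance_uuid, refresh_fn):
        # Entering and leaving this context produces the "Acquiring lock" /
        # "Acquired lock" / "Releasing lock" DEBUG lines seen in the journal.
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            return refresh_fn(instance_uuid)
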
Dec  1 19:34:53 compute-0 nova_compute[189564]: 2025-12-01 19:34:53.475 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:34:53 compute-0 nova_compute[189564]: 2025-12-01 19:34:53.476 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:34:53 compute-0 nova_compute[189564]: 2025-12-01 19:34:53.478 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:34:53 compute-0 nova_compute[189564]: 2025-12-01 19:34:53.479 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:34:54 compute-0 nova_compute[189564]: 2025-12-01 19:34:54.251 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:34:54 compute-0 nova_compute[189564]: 2025-12-01 19:34:54.251 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:34:54 compute-0 podman[241983]: 2025-12-01 19:34:54.365341686 +0000 UTC m=+0.124377882 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.285 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.285 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.285 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.285 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.306 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.394 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.487 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.489 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.587 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.592 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.686 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.687 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.778 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.784 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.838 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.878 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.879 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.970 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:34:57 compute-0 nova_compute[189564]: 2025-12-01 19:34:57.972 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.045 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.047 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.141 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
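
Each qemu-img probe above is wrapped in oslo.concurrency's prlimit helper, so a misbehaving qemu-img is capped at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) before its JSON output is trusted. A sketch of the equivalent call through processutils (nova's actual call site differs in detail):

    # qemu_img_info_sketch.py - bounded qemu-img probe, as in the CMD lines above
    from oslo_concurrency import processutils

    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        address_space=1073741824,  # --as=1073741824 (1 GiB)
        cpu_time=30,               # --cpu=30 (seconds)
    )

    def qemu_img_info(path):
        # Runs: python3 -m oslo_concurrency.prlimit --as=... --cpu=30 --
        #       env LC_ALL=C LANG=C qemu-img info <path> --force-share --output=json
        out, _err = processutils.execute(
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
            prlimit=QEMU_IMG_LIMITS,
        )
        return out
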
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.592 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.594 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5057MB free_disk=72.36132431030273GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.594 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.595 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.659 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.660 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance f4a023f0-04a7-470f-88ef-6284e0580f9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.660 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.661 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.675 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.696 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.696 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.711 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.740 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.811 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.827 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.830 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:34:58 compute-0 nova_compute[189564]: 2025-12-01 19:34:58.831 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
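
The final resource view and the placement inventory above are consistent: placement computes schedulable capacity per resource class as (total - reserved) * allocation_ratio. Checking the logged values (standard placement arithmetic, not code taken from nova):

    # capacity_check.py - reproduce the capacity implied by the inventory above
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")

    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 70.2 -- consistent with the final
    # view's total_vcpus=8/used_vcpus=2 and used_ram=1536MB above.
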
Dec  1 19:34:59 compute-0 podman[242029]: 2025-12-01 19:34:59.347613828 +0000 UTC m=+0.107853457 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:34:59 compute-0 podman[203750]: time="2025-12-01T19:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:34:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:34:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
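
The two access-log lines come from podman_exporter polling the Podman REST API over the socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data above). The same endpoint can be queried directly with only the standard library; a minimal sketch:

    # podman_api_sketch.py - query the libpod endpoint seen in the access log
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")  # host is unused for unix sockets
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # expect 200, as logged above
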
Dec  1 19:34:59 compute-0 nova_compute[189564]: 2025-12-01 19:34:59.826 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:34:59 compute-0 nova_compute[189564]: 2025-12-01 19:34:59.895 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:35:01 compute-0 openstack_network_exporter[205914]: ERROR   19:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:35:01 compute-0 openstack_network_exporter[205914]: ERROR   19:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:35:01 compute-0 openstack_network_exporter[205914]: ERROR   19:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:35:01 compute-0 openstack_network_exporter[205914]: ERROR   19:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:35:01 compute-0 openstack_network_exporter[205914]: ERROR   19:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:35:02 compute-0 nova_compute[189564]: 2025-12-01 19:35:02.310 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:35:02 compute-0 podman[242051]: 2025-12-01 19:35:02.348370094 +0000 UTC m=+0.116932800 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  1 19:35:02 compute-0 podman[242052]: 2025-12-01 19:35:02.372236537 +0000 UTC m=+0.121407890 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:35:02 compute-0 nova_compute[189564]: 2025-12-01 19:35:02.844 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:35:05 compute-0 podman[242089]: 2025-12-01 19:35:05.35748847 +0000 UTC m=+0.118036534 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 19:35:05 compute-0 podman[242088]: 2025-12-01 19:35:05.384532862 +0000 UTC m=+0.141652980 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:35:05 compute-0 podman[242090]: 2025-12-01 19:35:05.418833469 +0000 UTC m=+0.163087876 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 19:35:07 compute-0 nova_compute[189564]: 2025-12-01 19:35:07.315 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:35:07 compute-0 nova_compute[189564]: 2025-12-01 19:35:07.848 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:35:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:35:12.181 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:35:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:35:12.181 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:35:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:35:12.182 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:35:12 compute-0 nova_compute[189564]: 2025-12-01 19:35:12.317 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:35:12 compute-0 nova_compute[189564]: 2025-12-01 19:35:12.852 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:35:14 compute-0 podman[242149]: 2025-12-01 19:35:14.333207261 +0000 UTC m=+0.096178314 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:35:17 compute-0 nova_compute[189564]: 2025-12-01 19:35:17.322 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:17 compute-0 nova_compute[189564]: 2025-12-01 19:35:17.858 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:18 compute-0 podman[242168]: 2025-12-01 19:35:18.334878966 +0000 UTC m=+0.098978302 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:35:22 compute-0 nova_compute[189564]: 2025-12-01 19:35:22.325 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:22 compute-0 nova_compute[189564]: 2025-12-01 19:35:22.864 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:25 compute-0 podman[242195]: 2025-12-01 19:35:25.348450834 +0000 UTC m=+0.105642038 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  1 19:35:27 compute-0 nova_compute[189564]: 2025-12-01 19:35:27.330 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:27 compute-0 nova_compute[189564]: 2025-12-01 19:35:27.866 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:29 compute-0 podman[203750]: time="2025-12-01T19:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:35:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:35:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4787 "" "Go-http-client/1.1"
Dec  1 19:35:30 compute-0 podman[242215]: 2025-12-01 19:35:30.38033528 +0000 UTC m=+0.131693880 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:35:31 compute-0 openstack_network_exporter[205914]: ERROR   19:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:35:31 compute-0 openstack_network_exporter[205914]: ERROR   19:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:35:31 compute-0 openstack_network_exporter[205914]: ERROR   19:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:35:31 compute-0 openstack_network_exporter[205914]: ERROR   19:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:35:31 compute-0 openstack_network_exporter[205914]: ERROR   19:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:35:32 compute-0 nova_compute[189564]: 2025-12-01 19:35:32.334 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:32 compute-0 nova_compute[189564]: 2025-12-01 19:35:32.875 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:33 compute-0 podman[242239]: 2025-12-01 19:35:33.368812494 +0000 UTC m=+0.127334584 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  1 19:35:33 compute-0 podman[242240]: 2025-12-01 19:35:33.374769549 +0000 UTC m=+0.122126552 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:35:36 compute-0 podman[242275]: 2025-12-01 19:35:36.363990056 +0000 UTC m=+0.110221242 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 19:35:36 compute-0 podman[242274]: 2025-12-01 19:35:36.38951291 +0000 UTC m=+0.142215786 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:35:36 compute-0 podman[242276]: 2025-12-01 19:35:36.439080843 +0000 UTC m=+0.169384482 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec  1 19:35:37 compute-0 nova_compute[189564]: 2025-12-01 19:35:37.339 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:37 compute-0 nova_compute[189564]: 2025-12-01 19:35:37.876 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:42 compute-0 nova_compute[189564]: 2025-12-01 19:35:42.342 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:42 compute-0 nova_compute[189564]: 2025-12-01 19:35:42.879 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:44 compute-0 podman[242333]: 2025-12-01 19:35:44.832768201 +0000 UTC m=+0.124622339 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec  1 19:35:47 compute-0 nova_compute[189564]: 2025-12-01 19:35:47.347 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:47 compute-0 nova_compute[189564]: 2025-12-01 19:35:47.884 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.812 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.813 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.813 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.823 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.827 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f4a023f0-04a7-470f-88ef-6284e0580f9e', 'name': 'vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.828 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.828 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.828 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.829 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.829 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:35:48.828938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.834 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.839 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.839 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.840 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.840 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.840 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.840 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.841 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.841 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.841 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:35:48.840897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.841 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.842 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.842 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.843 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.843 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.843 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.843 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.843 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:35:48.843500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.844 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.845 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.845 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.845 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.846 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.846 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.846 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.846 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:35:48.846278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.846 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.847 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.848 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.848 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.848 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.848 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.848 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.849 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.849 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.849 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:35:48.849096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.850 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.850 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.851 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.851 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.851 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.851 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.851 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:35:48.851737) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.888 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.889 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.890 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.923 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.924 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.924 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.925 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.925 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.926 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.926 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.926 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.926 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:48.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:35:48.926650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.033 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.033 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.034 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.148 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.149 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.150 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.150 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
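disk.device.read.bytes is a cumulative counter (total bytes the guest has read from each device), so throughput needs two successive polls. A sketch of the derivation, assuming the counter only resets when the domain restarts; the previous-poll value below is hypothetical:

    from datetime import datetime, timedelta

    def byte_rate(prev_volume, prev_ts, cur_volume, cur_ts):
        """Bytes/second between two polls of a monotonically increasing counter."""
        dt = (cur_ts - prev_ts).total_seconds()
        if dt <= 0:
            return 0.0
        # A counter reset (e.g. instance reboot) shows up as cur < prev.
        return max(cur_volume - prev_volume, 0) / dt

    t0 = datetime(2025, 12, 1, 19, 30, 49)
    print(byte_rate(23_000_000, t0, 23_308_800, t0 + timedelta(seconds=300)))  # ~1029 B/s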
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.151 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.151 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.151 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.151 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.151 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.152 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.152 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:35:49.151900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.152 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.153 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.153 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.153 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
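The skip above means discovery handed this rate pollster no resources to work on this cycle. A rough paraphrase of that guard, not the actual manager.py code:

    import logging

    LOG = logging.getLogger(__name__)

    def maybe_poll(pollster, resources):
        # With nothing discovered there is nothing to sample, so bail out
        # early -- this mirrors the message logged at manager.py:321.
        if not resources:
            LOG.debug("Skip pollster %s, no new resources found this cycle",
                      pollster.name)
            return
        pollster.poll(resources)   # placeholder for the real sampling call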
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.154 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.154 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.154 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.154 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.155 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:35:49.154720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.155 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.156 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.156 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 571654353 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.157 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 100146044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.157 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 76562748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.158 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.158 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.158 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.158 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.159 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:35:49.159083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.159 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.160 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.160 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.161 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.161 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.161 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.162 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
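Pairing the latency and request counters from the last two cycles gives a per-request figure. Assuming disk.device.read.latency is cumulative nanoseconds of read time, as with libvirt's block stats, the first device of instance e73931e9-... averages roughly 565 µs per read:

    # Values copied from the samples above; the nanosecond interpretation
    # is an assumption based on libvirt block-stats semantics.
    read_latency_ns = 474_440_550
    read_requests = 840
    print(f"~{read_latency_ns / read_requests / 1_000:.0f} us/read")  # ~565 us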
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.162 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.163 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.163 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.163 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.163 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:35:49.163547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.164 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.164 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.164 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.165 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.165 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.166 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.166 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
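Dividing disk.device.usage by the disk.device.capacity value polled earlier in the cycle for the same device gives a quick utilization figure, about 2% for the first device of e73931e9-...:

    usage_bytes = 21_233_664        # disk.device.usage, first device above
    capacity_bytes = 1_073_741_824  # disk.device.capacity, same device
    print(f"{usage_bytes / capacity_bytes:.1%}")  # -> 2.0%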
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.167 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.167 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.167 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.167 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.167 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.168 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.168 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.169 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.169 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.170 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.171 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.172 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.172 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.172 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.173 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:35:49.167878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.173 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.173 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:35:49.173630) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.207 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.248 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 nova_compute[189564]: 2025-12-01 19:35:49.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:49 compute-0 nova_compute[189564]: 2025-12-01 19:35:49.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.249 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
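Both instances report power.state volume 1. In nova's power_state enumeration (and equally in libvirt's virDomainState) 1 is a running domain; the assumption here is that ceilometer passes the hypervisor value through unchanged:

    # nova.compute.power_state numbering (2 and 5 are unused by nova):
    POWER_STATES = {0: "nostate", 1: "running", 3: "paused",
                    4: "shutdown", 6: "crashed", 7: "suspended"}
    print(POWER_STATES[1])  # -> running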
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.249 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 nova_compute[189564]: 2025-12-01 19:35:49.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
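Interleaved with the ceilometer cycle, nova-compute's periodic task loop kicks off _heal_instance_info_cache. Such tasks follow the oslo.service pattern sketched below; the spacing value is illustrative (nova configures the real interval through its own options):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # Invoked by run_periodic_tasks, as in the log above; nova's real
        # method refreshes one instance's network info cache per pass.
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            ...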
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.249 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.249 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.250 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.250 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:35:49.250167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.250 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.251 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.251 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.252 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 1158162729 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.253 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 13740853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.253 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.254 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.254 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.254 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.255 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.255 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:35:49.255450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.255 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.256 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.256 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.257 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.257 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.258 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.258 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.259 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
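The same per-request arithmetic on the write side: the first device of e73931e9-... averages about 4.8 ms per write (1119912171 ns over 233 requests), roughly eight times the read-side figure, under the same cumulative-nanoseconds assumption:

    print(f"{1_119_912_171 / 233 / 1e6:.1f} ms/write")  # ~4.8 ms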
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.260 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.260 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.260 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.260 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.261 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.261 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:35:49.261105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.261 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.262 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.262 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.263 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.263 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.264 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.265 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
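The three gauges disk.device.capacity, disk.device.allocation and disk.device.usage polled this cycle line up with the capacity/allocation/physical triple returned by libvirt's virDomainGetBlockInfo(); taking that correspondence as an assumption, the gap between capacity and allocation is the thin-provisioning headroom:

    # Values for the first device of e73931e9-..., copied from this cycle.
    block_info = {"capacity": 1_073_741_824,   # guest-visible size
                  "allocation": 21_307_392,    # bytes allocated on the host
                  "physical": 21_233_664}      # host file/blockdev size
    headroom = block_info["capacity"] - block_info["allocation"]
    print(f"~{headroom / 2**20:.0f} MiB unallocated")  # ~1004 MiB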
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.266 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.266 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.267 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.267 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.267 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:35:49.267625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.269 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.269 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.269 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.269 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.270 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:35:49.270437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.270 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.272 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.273 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.274 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
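Cross-referencing network.incoming.packets with the network.incoming.bytes values polled moments earlier gives average inbound packet sizes of roughly 116 B and 156 B, consistent with low-rate control traffic rather than bulk transfer:

    inbound = {"e73931e9": (1968, 17),    # (bytes, packets), from this cycle
               "f4a023f0": (4849, 31)}
    for uuid8, (nbytes, npkts) in inbound.items():
        print(f"{uuid8}: ~{nbytes / npkts:.0f} B/packet")  # ~116 B, ~156 B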
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.274 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.275 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.275 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.275 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.275 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.276 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.276 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.276 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.276 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.276 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:35:49.275436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.277 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.277 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.277 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.278 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.278 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.278 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.278 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.279 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:35:49.277028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.279 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:35:49.279495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.280 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.280 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.280 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
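Drop and error counters are zero for both instances. If these meters fed a health check, the natural derived quantity would be an error ratio against the packet counter polled above; a trivial sketch:

    def nic_error_ratio(errors: int, packets: int) -> float:
        # Guard against the freshly booted case where nothing has arrived yet.
        return errors / packets if packets else 0.0

    assert nic_error_ratio(0, 17) == 0.0   # e73931e9-..., this cycle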
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.280 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.280 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.281 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.281 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.281 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.281 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.281 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.bytes volume: 4892 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.282 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.282 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.283 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:35:49.281260) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.283 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.283 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.283 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.283 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 37180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.283 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/cpu volume: 191620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.284 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
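The cpu meter is cumulative guest CPU time in nanoseconds (37.18 s and 191.62 s since boot here), so utilization again takes two polls. A sketch assuming a 300-second polling interval and one vCPU, neither of which is stated in this log:

    def cpu_util_pct(prev_ns, cur_ns, interval_s, vcpus):
        return (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100

    # Hypothetical next poll that grew the counter by 0.3 s of CPU time:
    print(f"{cpu_util_pct(37_180_000_000, 37_480_000_000, 300, 1):.1f}%")  # 0.1%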
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.284 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.284 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.285 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.285 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.285 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.285 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.286 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/memory.usage volume: 49.171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.286 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
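memory.usage is reported in MB, so the values above are absolute resident usage, not percentages. Checked against the 512 MB allocation the resource tracker reports for these instances further down (plain arithmetic, not ceilometer code):

    usage_mb, flavor_mb = 48.79296875, 512
    print(round(100 * usage_mb / flavor_mb, 1))   # -> 9.5 (percent of allocated RAM)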
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:35:49.283396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:35:49.285550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.287 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.288 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.289 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:35:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:35:49.290 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
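Each "Finished processing pollster [...]" line closes one pass of the polling task over a single meter. The loop behind these messages is roughly the following (an assumed simplification, not the actual ceilometer implementation):

    def run_polling_task(extensions, discover, publish):
        for ext in extensions:
            resources = discover(ext)               # e.g. local_instances
            if not resources:
                continue                            # logged as "Skip pollster ..."
            samples = ext.obj.get_samples(manager=None, cache={},
                                          resources=resources)
            publish(list(samples))                  # then "Finished processing ..."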
Dec  1 19:35:49 compute-0 podman[242356]: 2025-12-01 19:35:49.342247929 +0000 UTC m=+0.102222632 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
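The container health_status events interleaved with the service logs come from podman's healthcheck timers running each container's configured 'test' command. The same probe can be fired by hand; a small sketch (assumes local access to podman and the container_name from the event above):

    import subprocess
    rc = subprocess.run(["podman", "healthcheck", "run", "node_exporter"]).returncode
    print("healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)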
Dec  1 19:35:49 compute-0 nova_compute[189564]: 2025-12-01 19:35:49.508 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:35:49 compute-0 nova_compute[189564]: 2025-12-01 19:35:49.508 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:35:49 compute-0 nova_compute[189564]: 2025-12-01 19:35:49.509 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:35:49 compute-0 nova_compute[189564]: 2025-12-01 19:35:49.509 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:35:52 compute-0 nova_compute[189564]: 2025-12-01 19:35:52.060 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:35:52 compute-0 nova_compute[189564]: 2025-12-01 19:35:52.074 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:35:52 compute-0 nova_compute[189564]: 2025-12-01 19:35:52.075 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
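The network_info blob logged at 19:35:52 carries everything the cache heal needs: one OVS VIF on br-int with a fixed address and an associated floating IP. Pulling the addresses out of that structure (illustrative parsing only; network_info_json stands in for the JSON list above):

    import json
    vif = json.loads(network_info_json)[0]
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])              # 192.168.0.47
            for fip in ip.get("floating_ips", []):
                print("floating:", fip["address"])      # 192.168.122.206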
Dec  1 19:35:52 compute-0 nova_compute[189564]: 2025-12-01 19:35:52.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:52 compute-0 nova_compute[189564]: 2025-12-01 19:35:52.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:52 compute-0 nova_compute[189564]: 2025-12-01 19:35:52.350 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:52 compute-0 nova_compute[189564]: 2025-12-01 19:35:52.887 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:53 compute-0 nova_compute[189564]: 2025-12-01 19:35:53.265 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:54 compute-0 nova_compute[189564]: 2025-12-01 19:35:54.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:54 compute-0 nova_compute[189564]: 2025-12-01 19:35:54.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:54 compute-0 nova_compute[189564]: 2025-12-01 19:35:54.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:35:54 compute-0 nova_compute[189564]: 2025-12-01 19:35:54.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:54 compute-0 nova_compute[189564]: 2025-12-01 19:35:54.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  1 19:35:56 compute-0 nova_compute[189564]: 2025-12-01 19:35:56.269 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:56 compute-0 podman[242379]: 2025-12-01 19:35:56.352882765 +0000 UTC m=+0.111859672 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:35:57 compute-0 nova_compute[189564]: 2025-12-01 19:35:57.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:57 compute-0 nova_compute[189564]: 2025-12-01 19:35:57.353 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:57 compute-0 nova_compute[189564]: 2025-12-01 19:35:57.890 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.280 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.282 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.374 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.478 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.479 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.540 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.544 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.608 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.610 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.670 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.681 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.741 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.744 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:35:59 compute-0 podman[203750]: time="2025-12-01T19:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:35:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:35:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.806 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.809 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.873 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.874 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:35:59 compute-0 nova_compute[189564]: 2025-12-01 19:35:59.969 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
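Every qemu-img probe above is wrapped in oslo_concurrency.prlimit with a 1 GiB address-space cap (--as=1073741824) and a 30-second CPU cap (--cpu=30), so a hung or malicious image cannot stall the resource audit. A sketch of issuing the same bounded call through oslo.concurrency (the disk path is the one from the log):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0',
        '--force-share', '--output=json', prlimit=limits)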
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.443 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
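This warning fires when the reported topology has more than one physical socket sharing a NUMA node, in which case nova cannot honour the 'socket' PCI NUMA affinity policy. That policy would normally be requested per flavor, e.g. (hypothetical flavor name; this host would not support it per the warning):

    openstack flavor set myflavor --property hw:pci_numa_affinity_policy=socket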
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.445 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5056MB free_disk=72.36142349243164GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.446 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.446 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.699 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.701 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance f4a023f0-04a7-470f-88ef-6284e0580f9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.701 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.702 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.868 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.883 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
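The allocation ratios in that inventory are what placement actually schedules against: effective capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values above (plain arithmetic, not placement code):

    inv = {'VCPU': (8, 0, 4.0), 'MEMORY_MB': (7680, 512, 1.0), 'DISK_GB': (79, 1, 0.9)}
    for rc, (total, reserved, ratio) in inv.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2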
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.885 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:36:00 compute-0 nova_compute[189564]: 2025-12-01 19:36:00.885 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:36:01 compute-0 podman[242424]: 2025-12-01 19:36:01.392469331 +0000 UTC m=+0.152419975 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:36:01 compute-0 openstack_network_exporter[205914]: ERROR   19:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:36:01 compute-0 openstack_network_exporter[205914]: ERROR   19:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:36:01 compute-0 openstack_network_exporter[205914]: ERROR   19:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:36:01 compute-0 openstack_network_exporter[205914]: ERROR   19:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:36:01 compute-0 openstack_network_exporter[205914]: ERROR   19:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
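These exporter errors repeat on a cadence and are environmental rather than transient: openstack_network_exporter probes OVS/OVN daemons through their control sockets, and on a compute node ovn-northd simply is not running (only ovn-controller is). A quick way to confirm what is actually present (these are the conventional OVS/OVN run directories; adjust if relocated):

    import glob
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none found")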
Dec  1 19:36:01 compute-0 nova_compute[189564]: 2025-12-01 19:36:01.885 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:36:02 compute-0 nova_compute[189564]: 2025-12-01 19:36:02.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:36:02 compute-0 nova_compute[189564]: 2025-12-01 19:36:02.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  1 19:36:02 compute-0 nova_compute[189564]: 2025-12-01 19:36:02.271 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  1 19:36:02 compute-0 nova_compute[189564]: 2025-12-01 19:36:02.357 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:36:02 compute-0 nova_compute[189564]: 2025-12-01 19:36:02.893 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:36:04 compute-0 podman[242450]: 2025-12-01 19:36:04.367536587 +0000 UTC m=+0.126231209 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, version=9.4, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, config_id=edpm, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:36:04 compute-0 podman[242451]: 2025-12-01 19:36:04.409866805 +0000 UTC m=+0.158512074 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec  1 19:36:07 compute-0 podman[242488]: 2025-12-01 19:36:07.353834454 +0000 UTC m=+0.110790940 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 19:36:07 compute-0 podman[242487]: 2025-12-01 19:36:07.360316005 +0000 UTC m=+0.123553626 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 19:36:07 compute-0 nova_compute[189564]: 2025-12-01 19:36:07.362 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:36:07 compute-0 podman[242489]: 2025-12-01 19:36:07.41927278 +0000 UTC m=+0.166475802 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:36:07 compute-0 nova_compute[189564]: 2025-12-01 19:36:07.899 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:36:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:12.183 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:36:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:12.183 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:36:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:12.184 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
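The acquire/release pair above, held for a millisecond, is the signature of oslo.concurrency's synchronized decorator serializing the process-monitor check. A minimal sketch of guarding a periodic task the same way:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        ...  # at most one thread inspects child processes at a time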
Dec  1 19:36:12 compute-0 nova_compute[189564]: 2025-12-01 19:36:12.366 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:36:12 compute-0 nova_compute[189564]: 2025-12-01 19:36:12.903 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:36:15 compute-0 podman[242548]: 2025-12-01 19:36:15.391866013 +0000 UTC m=+0.159033200 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible)
Dec  1 19:36:17 compute-0 nova_compute[189564]: 2025-12-01 19:36:17.368 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:17 compute-0 nova_compute[189564]: 2025-12-01 19:36:17.909 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:20 compute-0 podman[242572]: 2025-12-01 19:36:20.31338724 +0000 UTC m=+0.073043046 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
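Per the config_data above, node_exporter publishes on host port 9100 with a TLS web config (--web.config.file), so the plain-HTTP scrape below is only an assumption for illustration; against this deployment the scheme and certificates would have to match the deployed web config:

    import urllib.request

    # Hypothetical plain-HTTP scrape of the exporter on :9100.
    with urllib.request.urlopen('http://localhost:9100/metrics', timeout=5) as r:
        print(r.read().decode()[:200])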
Dec  1 19:36:22 compute-0 nova_compute[189564]: 2025-12-01 19:36:22.373 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:22 compute-0 nova_compute[189564]: 2025-12-01 19:36:22.914 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:27.246 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:36:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:27.247 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
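The matched SbGlobalUpdateEvent is an ovsdbapp row event: it fires when SB_Global changes (here nb_cfg 4 -> 5), and the agent responds by scheduling the delayed chassis write logged next. A minimal sketch of such an event class following the ovsdbapp RowEvent pattern (handler body hypothetical):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # Match only 'update' events on SB_Global, as in the log repr.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # Hypothetical handler: note row.nb_cfg and schedule the
            # delayed Chassis_Private update seen below.
            pass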
Dec  1 19:36:27 compute-0 nova_compute[189564]: 2025-12-01 19:36:27.252 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:27 compute-0 nova_compute[189564]: 2025-12-01 19:36:27.377 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:27 compute-0 podman[242595]: 2025-12-01 19:36:27.389313 +0000 UTC m=+0.150023825 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 19:36:27 compute-0 nova_compute[189564]: 2025-12-01 19:36:27.918 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:29 compute-0 podman[203750]: time="2025-12-01T19:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:36:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:36:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
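These "@ - -" access-log lines are the podman system service answering libpod REST calls over its unix socket; the podman_exporter configured below with CONTAINER_HOST=unix:///run/podman/podman.sock is the likely client. A sketch of the same container-list call from Python (socket path taken from that config):

    import http.client
    import socket

    class PodmanUDS(http.client.HTTPConnection):
        """HTTP over the podman API unix socket."""
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = PodmanUDS('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(conn.getresponse().status)  # 200, as in the access log above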
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.343 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.343 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.364 189568 DEBUG nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
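The build path serializes on a lock named after the instance UUID, so a duplicate build request for the same instance waits instead of racing. A minimal sketch of that pattern with oslo.concurrency's context-manager form (UUID copied from the log; body hypothetical):

    from oslo_concurrency import lockutils

    with lockutils.lock('850ac274-3f22-41ce-b7d7-ac64d7adac70'):
        # _do_build_and_run_instance work would happen here; a second
        # build request for the same UUID blocks until this exits.
        pass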
Dec  1 19:36:31 compute-0 openstack_network_exporter[205914]: ERROR   19:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:36:31 compute-0 openstack_network_exporter[205914]: ERROR   19:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:36:31 compute-0 openstack_network_exporter[205914]: ERROR   19:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:36:31 compute-0 openstack_network_exporter[205914]: ERROR   19:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:36:31 compute-0 openstack_network_exporter[205914]: ERROR   19:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
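The exporter errors above are expected on a compute node: ovn-northd only runs on the control plane, so no control socket exists here, and the dpif-netdev/* appctl commands apply only to userspace (DPDK) datapaths, not the kernel datapath this host uses. Reproducing one query by hand shows the same refusal:

    import subprocess

    # Fails with "please specify an existing datapath" on kernel-datapath
    # hosts, exactly as logged.
    subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'], check=False)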
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.470 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.471 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.484 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.485 189568 INFO nova.compute.claims [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.671 189568 DEBUG nova.compute.provider_tree [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.717 189568 DEBUG nova.scheduler.client.report [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.750 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
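Side note: the inventory above fixes the capacity placement schedules against, computed as (total - reserved) * allocation_ratio per resource class. With the logged values:

    # Capacity implied by the inventory line above.
    vcpus   = (8 - 0) * 4.0       # 32.0 schedulable vCPUs
    mem_mb  = (7680 - 512) * 1.0  # 7168.0 MB for guests
    disk_gb = (79 - 1) * 0.9      # 70.2 GB for guests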
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.751 189568 DEBUG nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.807 189568 DEBUG nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.808 189568 DEBUG nova.network.neutron [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.839 189568 INFO nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 19:36:31 compute-0 nova_compute[189564]: 2025-12-01 19:36:31.877 189568 DEBUG nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.002 189568 DEBUG nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.004 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.005 189568 INFO nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Creating image(s)#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.006 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.007 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.008 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.031 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.131 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
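Each of these qemu-img probes is wrapped in oslo.concurrency's prlimit helper so a corrupt or hostile image cannot hang the service or balloon its memory: 1 GiB of address space and 30 s of CPU, as the logged command line shows. The same call expressed through the library (path copied from the log):

    from oslo_concurrency import processutils

    # prlimit mirrors the logged guard: --as=1073741824 --cpu=30.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})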
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.133 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.134 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.161 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:32.249 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
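Here the agent acknowledges the new nb_cfg by writing it into Chassis_Private.external_ids through an ovsdbapp transaction. A sketch of the equivalent standalone call; the southbound remote below is an assumption, while the table, record UUID, and key are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Remote is hypothetical; the agent uses its configured SB connection.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))
    sb.db_set(
        'Chassis_Private', '91869463-7ce7-4561-8225-db4a77bb5f12',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),
    ).execute(check_error=True)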
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.252 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.253 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683,backing_fmt=raw /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.306 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683,backing_fmt=raw /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk 1073741824" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.308 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "1324593a3f01becd5f72fdfdb0281e45c2a6b683" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
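The create command above builds the instance disk as a qcow2 copy-on-write overlay: reads of untouched clusters fall through to the shared _base image, writes land in the per-instance file. Verifying the resulting chain takes one probe (paths copied from the log):

    import json
    import subprocess

    # 'backing-filename' should point at the shared _base image.
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', '--force-share', '--output=json',
         '/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk']))
    print(info.get('backing-filename'), info.get('format'))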
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.308 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:32 compute-0 podman[242620]: 2025-12-01 19:36:32.329377298 +0000 UTC m=+0.097724906 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.380 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.422 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.423 189568 DEBUG nova.virt.disk.api [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Checking if we can resize image /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.423 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.519 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.521 189568 DEBUG nova.virt.disk.api [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Cannot resize image /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.522 189568 DEBUG nova.objects.instance [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'migration_context' on Instance uuid 850ac274-3f22-41ce-b7d7-ac64d7adac70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.544 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.544 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.546 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.566 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.657 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.659 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.660 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.678 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.735 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.737 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.782 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.783 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.784 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.851 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.853 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.853 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Ensure instance console log exists: /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.854 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.855 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.855 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:36:32 compute-0 nova_compute[189564]: 2025-12-01 19:36:32.920 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:34 compute-0 nova_compute[189564]: 2025-12-01 19:36:34.290 189568 DEBUG nova.network.neutron [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Successfully updated port: 076102cd-d411-4d3d-a31e-4851d4a8d107 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 19:36:34 compute-0 nova_compute[189564]: 2025-12-01 19:36:34.309 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:36:34 compute-0 nova_compute[189564]: 2025-12-01 19:36:34.310 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:36:34 compute-0 nova_compute[189564]: 2025-12-01 19:36:34.310 189568 DEBUG nova.network.neutron [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 19:36:34 compute-0 nova_compute[189564]: 2025-12-01 19:36:34.414 189568 DEBUG nova.compute.manager [req-99a93567-dbab-47c6-b476-a68e0984aeb0 req-13c15835-6a06-4e50-b0fa-cfd0ad4fb83c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-changed-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:36:34 compute-0 nova_compute[189564]: 2025-12-01 19:36:34.414 189568 DEBUG nova.compute.manager [req-99a93567-dbab-47c6-b476-a68e0984aeb0 req-13c15835-6a06-4e50-b0fa-cfd0ad4fb83c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Refreshing instance network info cache due to event network-changed-076102cd-d411-4d3d-a31e-4851d4a8d107. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 19:36:34 compute-0 nova_compute[189564]: 2025-12-01 19:36:34.415 189568 DEBUG oslo_concurrency.lockutils [req-99a93567-dbab-47c6-b476-a68e0984aeb0 req-13c15835-6a06-4e50-b0fa-cfd0ad4fb83c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:36:34 compute-0 nova_compute[189564]: 2025-12-01 19:36:34.499 189568 DEBUG nova.network.neutron [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 19:36:35 compute-0 podman[242668]: 2025-12-01 19:36:35.373910423 +0000 UTC m=+0.134730689 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:36:35 compute-0 podman[242667]: 2025-12-01 19:36:35.382034816 +0000 UTC m=+0.147104994 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, name=ubi9, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.568 189568 DEBUG nova.network.neutron [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
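The network_info cache entry above is what the libvirt driver consumes when wiring the VIF. Walking that structure for the addresses (dict trimmed to the relevant fields; values copied from the log):

    port = {
        "id": "076102cd-d411-4d3d-a31e-4851d4a8d107",
        "address": "fa:16:3e:ce:df:71",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.62",
            "floating_ips": [{"address": "192.168.122.240"}],
        }]}]},
    }
    fixed = [ip["address"]
             for subnet in port["network"]["subnets"] for ip in subnet["ips"]]
    floating = [fip["address"]
                for subnet in port["network"]["subnets"] for ip in subnet["ips"]
                for fip in ip.get("floating_ips", [])]
    print(port["address"], fixed, floating)
    # fa:16:3e:ce:df:71 ['192.168.0.62'] ['192.168.122.240']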
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.600 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.601 189568 DEBUG nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Instance network_info: |[{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.602 189568 DEBUG oslo_concurrency.lockutils [req-99a93567-dbab-47c6-b476-a68e0984aeb0 req-13c15835-6a06-4e50-b0fa-cfd0ad4fb83c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.603 189568 DEBUG nova.network.neutron [req-99a93567-dbab-47c6-b476-a68e0984aeb0 req-13c15835-6a06-4e50-b0fa-cfd0ad4fb83c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Refreshing network info cache for port 076102cd-d411-4d3d-a31e-4851d4a8d107 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.609 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Start _get_guest_xml network_info=[{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T19:28:30Z,direct_url=<?>,disk_format='qcow2',id=15bc897a-453b-4133-b6db-08ecdc2b6db0,min_disk=0,min_ram=0,name='cirros',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T19:28:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}], 'ephemerals': [{'guest_format': None, 'encryption_options': None, 'size': 1, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.623 189568 WARNING nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.637 189568 DEBUG nova.virt.libvirt.host [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.639 189568 DEBUG nova.virt.libvirt.host [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.646 189568 DEBUG nova.virt.libvirt.host [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.648 189568 DEBUG nova.virt.libvirt.host [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
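The v1 probe fails and the v2 probe succeeds because el9 mounts the unified cgroup hierarchy, where available controllers are listed in a single file. Roughly what the v2 check amounts to (standard unified-hierarchy path assumed):

    # cgroups v2: one file enumerates the controllers available on the host.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu' in f.read().split())  # True on this host, per the log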
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.649 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.650 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T19:28:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='0891a7f6-7194-4f33-bc11-6f6ab8b16145',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T19:28:30Z,direct_url=<?>,disk_format='qcow2',id=15bc897a-453b-4133-b6db-08ecdc2b6db0,min_disk=0,min_ram=0,name='cirros',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T19:28:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.651 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.652 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.652 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.653 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.654 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.655 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.656 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.656 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.657 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.658 189568 DEBUG nova.virt.hardware [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
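The hardware.py lines above walk the whole topology negotiation: flavor and image supply no limits or preferences (0:0:0), so the preferred topology is unset, the ceiling is 65536 per dimension, and the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. A rough Python sketch of that enumeration step, assuming the simplified rule that sockets*cores*threads must equal the vCPU count (an illustration of the search, not Nova's actual implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product equals the
        # vCPU count, capped per dimension -- the shape of search behind
        # "Got 1 possible topologies" in the log above.
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log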
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.665 189568 DEBUG nova.virt.libvirt.vif [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T19:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx',id=3,image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35d2a9caf1634dca9fc12ec078239d84',ramdisk_id='',reservation_id='r-fbknfj75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T19:36:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3MjIwNTAwODE5MDY0NzAwODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTcyMjA1MDA4MTkwNjQ3MDA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3MjIwNTAwODE5MDY0NzAwODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  1 19:36:36 compute-0 nova_compute[189564]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTcyMjA1MDA4MTkwNjQ3MDA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3MjIwNTAwODE5MDY0NzAwODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0tLQo=',user_id='7c24e8f82e7842b785e565ac65c7f494',uuid=850ac274-3f22-41ce-b7d7-ac64d7adac70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.666 189568 DEBUG nova.network.os_vif_util [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converting VIF {"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.668 189568 DEBUG nova.network.os_vif_util [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:df:71,bridge_name='br-int',has_traffic_filtering=True,id=076102cd-d411-4d3d-a31e-4851d4a8d107,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap076102cd-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.671 189568 DEBUG nova.objects.instance [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'pci_devices' on Instance uuid 850ac274-3f22-41ce-b7d7-ac64d7adac70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.703 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] End _get_guest_xml xml=<domain type="kvm">
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <uuid>850ac274-3f22-41ce-b7d7-ac64d7adac70</uuid>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <name>instance-00000003</name>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <memory>524288</memory>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <metadata>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <nova:name>vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx</nova:name>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 19:36:36</nova:creationTime>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <nova:flavor name="m1.small">
Dec  1 19:36:36 compute-0 nova_compute[189564]:        <nova:memory>512</nova:memory>
Dec  1 19:36:36 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 19:36:36 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 19:36:36 compute-0 nova_compute[189564]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 19:36:36 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 19:36:36 compute-0 nova_compute[189564]:        <nova:user uuid="7c24e8f82e7842b785e565ac65c7f494">admin</nova:user>
Dec  1 19:36:36 compute-0 nova_compute[189564]:        <nova:project uuid="35d2a9caf1634dca9fc12ec078239d84">admin</nova:project>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="15bc897a-453b-4133-b6db-08ecdc2b6db0"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 19:36:36 compute-0 nova_compute[189564]:        <nova:port uuid="076102cd-d411-4d3d-a31e-4851d4a8d107">
Dec  1 19:36:36 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="192.168.0.62" ipVersion="4"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  </metadata>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <system>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <entry name="serial">850ac274-3f22-41ce-b7d7-ac64d7adac70</entry>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <entry name="uuid">850ac274-3f22-41ce-b7d7-ac64d7adac70</entry>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </system>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <os>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  </os>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <features>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <apic/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  </features>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  </clock>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  </cpu>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  <devices>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <target dev="vdb" bus="virtio"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.config"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:ce:df:71"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <target dev="tap076102cd-d4"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </interface>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/console.log" append="off"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </serial>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <video>
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </video>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </rng>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 19:36:36 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 19:36:36 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 19:36:36 compute-0 nova_compute[189564]:  </devices>
Dec  1 19:36:36 compute-0 nova_compute[189564]: </domain>
Dec  1 19:36:36 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
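The domain XML above is the complete libvirt definition Nova generated: q35 machine type (from the image's hw_machine_type property), host-model CPU with the 1:1:1 topology chosen earlier, two qcow2 virtio disks plus a SATA cdrom for the config drive, and an ethernet interface targeting the tap device that os-vif is about to plug. For post-mortem work it is often handy to pull the device list out of such a dump programmatically; a small sketch using the standard library (the XML literal below is an abbreviated copy of the dump, trimmed to the elements the sketch touches):

    import xml.etree.ElementTree as ET

    domain_xml = """
    <domain type="kvm">
      <name>instance-00000003</name>
      <devices>
        <disk type="file" device="disk">
          <source file="/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <disk type="file" device="cdrom">
          <source file="/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.config"/>
          <target dev="sda" bus="sata"/>
        </disk>
        <interface type="ethernet">
          <mac address="fa:16:3e:ce:df:71"/>
          <target dev="tap076102cd-d4"/>
        </interface>
      </devices>
    </domain>
    """

    root = ET.fromstring(domain_xml)
    for disk in root.findall('./devices/disk'):
        print(disk.get('device'), disk.find('target').get('dev'),
              disk.find('source').get('file'))
    for nic in root.findall('./devices/interface'):
        print(nic.find('mac').get('address'), nic.find('target').get('dev'))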
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.704 189568 DEBUG nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Preparing to wait for external event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.705 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.705 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.706 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
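The Acquiring/acquired/released triple above is the standard oslo_concurrency.lockutils pattern: prepare_for_instance_event serializes access to the per-instance event table under a lock named after the instance UUID. A minimal sketch of the same pattern (lock name copied from the log; the function body is a placeholder, not Nova's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('850ac274-3f22-41ce-b7d7-ac64d7adac70-events')
    def _create_or_get_event():
        # Critical section: only one thread at a time may touch the
        # instance's pending-event table, which is what produces the
        # acquire/release debug lines above.
        pass

    _create_or_get_event()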
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.707 189568 DEBUG nova.virt.libvirt.vif [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T19:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx',id=3,image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35d2a9caf1634dca9fc12ec078239d84',ramdisk_id='',reservation_id='r-fbknfj75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T19:36:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3MjIwNTAwODE5MDY0NzAwODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTcyMjA1MDA4MTkwNjQ3MDA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3MjIwNTAwODE5MDY0NzAwODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  1 19:36:36 compute-0 nova_compute[189564]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTcyMjA1MDA4MTkwNjQ3MDA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3MjIwNTAwODE5MDY0NzAwODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0tLQo=',user_id='7c24e8f82e7842b785e565ac65c7f494',uuid=850ac274-3f22-41ce-b7d7-ac64d7adac70,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.707 189568 DEBUG nova.network.os_vif_util [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converting VIF {"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.708 189568 DEBUG nova.network.os_vif_util [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:df:71,bridge_name='br-int',has_traffic_filtering=True,id=076102cd-d411-4d3d-a31e-4851d4a8d107,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap076102cd-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.709 189568 DEBUG os_vif [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:df:71,bridge_name='br-int',has_traffic_filtering=True,id=076102cd-d411-4d3d-a31e-4851d4a8d107,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap076102cd-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.710 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.710 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.711 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.715 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.715 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap076102cd-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.716 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap076102cd-d4, col_values=(('external_ids', {'iface-id': '076102cd-d411-4d3d-a31e-4851d4a8d107', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:df:71', 'vm-uuid': '850ac274-3f22-41ce-b7d7-ac64d7adac70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.718 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.720 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 19:36:36 compute-0 NetworkManager[56474]: <info>  [1764617796.7205] manager: (tap076102cd-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.733 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.734 189568 INFO os_vif [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:df:71,bridge_name='br-int',has_traffic_filtering=True,id=076102cd-d411-4d3d-a31e-4851d4a8d107,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap076102cd-d4')#033[00m
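The plug sequence above (Converting VIF, Converted object, Plugging vif, two ovsdbapp transactions, Successfully plugged) is os-vif driving OVS through ovsdbapp's Open_vSwitch schema API: an idempotent add_br that caused no change, then an add_port plus a db_set of the Neutron external_ids on the new Interface row. A condensed sketch of the same calls with values copied from the log (the OVSDB socket path and timeout are assumptions, not shown in the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Mirrors "Running txn n=1 command(idx=0): AddBridgeCommand(...)";
    # the bridge already exists, hence "Transaction caused no change".
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))

    # Mirrors the AddPortCommand/DbSetCommand pair in the second txn.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap076102cd-d4', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap076102cd-d4',
            ('external_ids', {
                'iface-id': '076102cd-d411-4d3d-a31e-4851d4a8d107',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:ce:df:71',
                'vm-uuid': '850ac274-3f22-41ce-b7d7-ac64d7adac70'})))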
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.800 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.800 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.801 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.801 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No VIF found with MAC fa:16:3e:ce:df:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 19:36:36 compute-0 nova_compute[189564]: 2025-12-01 19:36:36.801 189568 INFO nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Using config drive#033[00m
Dec  1 19:36:36 compute-0 rsyslogd[236874]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 19:36:36.665 189568 DEBUG nova.virt.libvirt.vif [None req-92e2ca87-58 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  1 19:36:37 compute-0 rsyslogd[236874]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 19:36:36.707 189568 DEBUG nova.virt.libvirt.vif [None req-92e2ca87-58 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
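These two rsyslogd lines explain the mid-stream cuts in the giant vif_type=ovs messages earlier: each of those nova_compute debug messages carries the full base64 user_data blob and exceeds rsyslog's configured 8096-byte limit, so part of the payload is lost. If the complete message matters, the limit can be raised in rsyslog.conf; the directive is standard rsyslog, the 64k value below is only an example, and it must be set before any input modules load:

    # /etc/rsyslog.conf -- place near the top, before module()/input lines
    $MaxMessageSize 64k

On current rsyslog the equivalent modern syntax is global(maxMessageSize="64k").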
Dec  1 19:36:37 compute-0 nova_compute[189564]: 2025-12-01 19:36:37.683 189568 INFO nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Creating config drive at /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.config#033[00m
Dec  1 19:36:37 compute-0 nova_compute[189564]: 2025-12-01 19:36:37.694 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp17joa_cm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:36:37 compute-0 nova_compute[189564]: 2025-12-01 19:36:37.847 189568 DEBUG oslo_concurrency.processutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp17joa_cm" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
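The pair of processutils lines above shows the config drive actually being built: Nova stages the metadata under a temporary directory and runs mkisofs over it, producing the ISO9660 volume labeled config-2 that the domain XML attached as the SATA cdrom. A sketch of the equivalent call from Python (paths copied from the log; note the multi-word publisher string is a single argv element even though oslo logs it space-joined):

    import subprocess

    inst = '/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70'
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', inst + '/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/tmp17joa_cm'],   # staging dir from the log; it is temporary
        check=True,
    )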
Dec  1 19:36:37 compute-0 nova_compute[189564]: 2025-12-01 19:36:37.924 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:37 compute-0 kernel: tap076102cd-d4: entered promiscuous mode
Dec  1 19:36:37 compute-0 NetworkManager[56474]: <info>  [1764617797.9903] manager: (tap076102cd-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec  1 19:36:37 compute-0 ovn_controller[97948]: 2025-12-01T19:36:37Z|00040|binding|INFO|Claiming lport 076102cd-d411-4d3d-a31e-4851d4a8d107 for this chassis.
Dec  1 19:36:37 compute-0 ovn_controller[97948]: 2025-12-01T19:36:37Z|00041|binding|INFO|076102cd-d411-4d3d-a31e-4851d4a8d107: Claiming fa:16:3e:ce:df:71 192.168.0.62
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:37.998 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.009 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:df:71 192.168.0.62'], port_security=['fa:16:3e:ce:df:71 192.168.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vz2nmrxztcck-a6xkcgll2h6t-dmjd3wlevael-port-elrdg4anttdl', 'neutron:cidrs': '192.168.0.62/24', 'neutron:device_id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a4b8529-6171-4880-a97c-66966115a61b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vz2nmrxztcck-a6xkcgll2h6t-dmjd3wlevael-port-elrdg4anttdl', 'neutron:project_id': '35d2a9caf1634dca9fc12ec078239d84', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e61a5e79-a7e0-4e4e-bcbc-f9aad845c2b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58f8227a-30b3-42df-b03a-90442a651a6d, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=076102cd-d411-4d3d-a31e-4851d4a8d107) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.010 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 076102cd-d411-4d3d-a31e-4851d4a8d107 in datapath 2a4b8529-6171-4880-a97c-66966115a61b bound to our chassis#033[00m
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.011 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a4b8529-6171-4880-a97c-66966115a61b#033[00m
Dec  1 19:36:38 compute-0 ovn_controller[97948]: 2025-12-01T19:36:38Z|00042|binding|INFO|Setting lport 076102cd-d411-4d3d-a31e-4851d4a8d107 ovn-installed in OVS
Dec  1 19:36:38 compute-0 ovn_controller[97948]: 2025-12-01T19:36:38Z|00043|binding|INFO|Setting lport 076102cd-d411-4d3d-a31e-4851d4a8d107 up in Southbound
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.029 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.036 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.037 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[472bd0d0-bfd2-441e-9c91-39f57ea9095d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:36:38 compute-0 systemd-machined[155891]: New machine qemu-3-instance-00000003.
Dec  1 19:36:38 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  1 19:36:38 compute-0 systemd-udevd[242779]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 19:36:38 compute-0 podman[242717]: 2025-12-01 19:36:38.075275575 +0000 UTC m=+0.133282613 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 19:36:38 compute-0 podman[242716]: 2025-12-01 19:36:38.080027913 +0000 UTC m=+0.135390949 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.080 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[301ed0a3-6339-447f-9a7a-43a80316a8be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:36:38 compute-0 NetworkManager[56474]: <info>  [1764617798.0846] device (tap076102cd-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.088 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7abea6-19e5-440e-9520-d819d5ced902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:36:38 compute-0 NetworkManager[56474]: <info>  [1764617798.0913] device (tap076102cd-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.121 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8d3fbb-f8e8-4003-b8cc-dbc91cf70701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:36:38 compute-0 podman[242718]: 2025-12-01 19:36:38.125907953 +0000 UTC m=+0.170072980 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.139 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc5ed6d-e202-485a-a3e2-d23292ce60b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a4b8529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388613, 'reachable_time': 31141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242799, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.157 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[3799df06-b0da-48b5-9759-f5759cf2ffa4]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388627, 'tstamp': 388627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242800, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388631, 'tstamp': 388631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242800, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
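The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the metadata namespace ovnmeta-2a4b8529-...: a veth named tap2a4b8529-61 that is up, carrying 192.168.0.2/24 plus the metadata address 169.254.169.254/32. The agent does this through oslo.privsep on top of pyroute2; a rough stand-alone equivalent (requires root and the pyroute2 library, which is what the agent itself uses):

    from pyroute2 import NetNS

    NS = "ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b"  # namespace from the log

    with NetNS(NS) as ns:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"), link["state"])
        for addr in ns.get_addr():
            # Expect 192.168.0.2/24 and 169.254.169.254/32 per the reply above.
            print(addr.get_attr("IFA_ADDRESS"), addr["prefixlen"])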
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.159 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a4b8529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.160 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.162 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.162 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a4b8529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.163 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.163 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a4b8529-60, col_values=(('external_ids', {'iface-id': 'f95692ff-1cac-46fe-9e62-21af9fa55eb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:36:38 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:36:38.164 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
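The three transactions above re-assert the metadata port wiring: drop tap2a4b8529-60 from br-ex if it exists, add it to br-int, and set external_ids:iface-id so ovn-controller can claim the interface; both mutations report "Transaction caused no change" because the port was already in the desired state. Written directly against ovsdbapp, the library doing the work here, the same sequence looks roughly like this (the unix-socket endpoint is an assumption, though it is the stock OVS location):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (assumed default socket path).
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap2a4b8529-60", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap2a4b8529-60", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap2a4b8529-60",
            ("external_ids",
             {"iface-id": "f95692ff-1cac-46fe-9e62-21af9fa55eb1"})))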
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.569 189568 DEBUG nova.network.neutron [req-99a93567-dbab-47c6-b476-a68e0984aeb0 req-13c15835-6a06-4e50-b0fa-cfd0ad4fb83c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated VIF entry in instance network info cache for port 076102cd-d411-4d3d-a31e-4851d4a8d107. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.571 189568 DEBUG nova.network.neutron [req-99a93567-dbab-47c6-b476-a68e0984aeb0 req-13c15835-6a06-4e50-b0fa-cfd0ad4fb83c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.587 189568 DEBUG oslo_concurrency.lockutils [req-99a93567-dbab-47c6-b476-a68e0984aeb0 req-13c15835-6a06-4e50-b0fa-cfd0ad4fb83c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
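The instance_info_cache payload logged at 19:36:38.571 is plain JSON once the log prefix is stripped: one VIF, its fixed IP, and the floating IP NATed to it. Extracting the addresses with nothing but the standard library (the literal below is abbreviated to the fields actually read):

    import json

    # Abbreviated copy of the network_info entry logged above.
    network_info = json.loads("""
    [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107",
      "address": "fa:16:3e:ce:df:71",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.62",
                 "floating_ips": [{"address": "192.168.122.240"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], fips)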
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.601 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764617798.6008866, 850ac274-3f22-41ce-b7d7-ac64d7adac70 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.602 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] VM Started (Lifecycle Event)#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.621 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.630 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764617798.6010056, 850ac274-3f22-41ce-b7d7-ac64d7adac70 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.631 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] VM Paused (Lifecycle Event)#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.652 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.659 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.681 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
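The sync at 19:36:38.659 compares the DB's power_state 0 against the hypervisor's 3. Those integers are nova.compute.power_state constants; the relevant values (as defined in Nova) decode the lifecycle trace above:

    # Values from nova.compute.power_state.
    STATE_NAME = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                  4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    # DB still says NOSTATE (never synced); the hypervisor reports PAUSED,
    # because the guest was created paused pending VIF plug:
    print(STATE_NAME[0], "->", STATE_NAME[3])

Because task_state is still "spawning", the sync is skipped rather than forcing the instance's recorded state to PAUSED mid-build.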
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.709 189568 DEBUG nova.compute.manager [req-0296ad76-5784-4b3d-bea2-a03931d406e9 req-3ceadd75-7a32-4ec3-91cf-fd16393eaae1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.711 189568 DEBUG oslo_concurrency.lockutils [req-0296ad76-5784-4b3d-bea2-a03931d406e9 req-3ceadd75-7a32-4ec3-91cf-fd16393eaae1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.712 189568 DEBUG oslo_concurrency.lockutils [req-0296ad76-5784-4b3d-bea2-a03931d406e9 req-3ceadd75-7a32-4ec3-91cf-fd16393eaae1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.713 189568 DEBUG oslo_concurrency.lockutils [req-0296ad76-5784-4b3d-bea2-a03931d406e9 req-3ceadd75-7a32-4ec3-91cf-fd16393eaae1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.713 189568 DEBUG nova.compute.manager [req-0296ad76-5784-4b3d-bea2-a03931d406e9 req-3ceadd75-7a32-4ec3-91cf-fd16393eaae1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Processing event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.715 189568 DEBUG nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
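The records from 19:36:38.709 to .715 show Nova's external-event rendezvous: the spawn thread registered a waiter for network-vif-plugged before plugging the VIF, so when Neutron's notification arrives the pop finds the waiter and the wait completes in 0 seconds. A deliberately simplified analogue of that pattern (plain threading.Event; Nova's real implementation lives in nova.compute.manager.InstanceEvents and differs in detail):

    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        waiters[(instance_uuid, event_name)] = threading.Event()

    def pop_instance_event(instance_uuid, event_name):
        ev = waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            # No registered waiter: the "Received unexpected event" WARNING
            # case seen at 19:36:40 below.
            return False
        ev.set()
        return True

    # Spawn side registers first, plugs the VIF, then waits;
    # Neutron's callback later fires the event:
    prepare_for_event("850ac274", "network-vif-plugged")
    pop_instance_event("850ac274", "network-vif-plugged")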
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.732 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764617798.7320757, 850ac274-3f22-41ce-b7d7-ac64d7adac70 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.746 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] VM Resumed (Lifecycle Event)#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.749 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.763 189568 INFO nova.virt.libvirt.driver [-] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Instance spawned successfully.#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.764 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.769 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.775 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.792 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.793 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.794 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.795 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.796 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.797 189568 DEBUG nova.virt.libvirt.driver [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
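The six "Found default for ..." records capture which bus and model choices the libvirt driver actually applied, so the instance keeps the same virtual hardware on later reboots and rebuilds even if Nova's defaults change. Collected as data, the defaults registered for this instance are:

    # Image-property defaults registered for instance 850ac274-...,
    # straight from the six log lines above.
    registered_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }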
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.803 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.859 189568 INFO nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Took 6.86 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.860 189568 DEBUG nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.946 189568 INFO nova.compute.manager [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Took 7.53 seconds to build instance.#033[00m
Dec  1 19:36:38 compute-0 nova_compute[189564]: 2025-12-01 19:36:38.964 189568 DEBUG oslo_concurrency.lockutils [None req-92e2ca87-5818-491b-861f-9c34a79f287f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:36:39 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 19:36:39 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 19:36:40 compute-0 nova_compute[189564]: 2025-12-01 19:36:40.784 189568 DEBUG nova.compute.manager [req-523f9bff-f419-4f7f-be51-bf2304b439cb req-eeedacdc-4286-41ca-9745-4282156bb318 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:36:40 compute-0 nova_compute[189564]: 2025-12-01 19:36:40.785 189568 DEBUG oslo_concurrency.lockutils [req-523f9bff-f419-4f7f-be51-bf2304b439cb req-eeedacdc-4286-41ca-9745-4282156bb318 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:36:40 compute-0 nova_compute[189564]: 2025-12-01 19:36:40.785 189568 DEBUG oslo_concurrency.lockutils [req-523f9bff-f419-4f7f-be51-bf2304b439cb req-eeedacdc-4286-41ca-9745-4282156bb318 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:36:40 compute-0 nova_compute[189564]: 2025-12-01 19:36:40.786 189568 DEBUG oslo_concurrency.lockutils [req-523f9bff-f419-4f7f-be51-bf2304b439cb req-eeedacdc-4286-41ca-9745-4282156bb318 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:36:40 compute-0 nova_compute[189564]: 2025-12-01 19:36:40.786 189568 DEBUG nova.compute.manager [req-523f9bff-f419-4f7f-be51-bf2304b439cb req-eeedacdc-4286-41ca-9745-4282156bb318 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] No waiting events found dispatching network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:36:40 compute-0 nova_compute[189564]: 2025-12-01 19:36:40.787 189568 WARNING nova.compute.manager [req-523f9bff-f419-4f7f-be51-bf2304b439cb req-eeedacdc-4286-41ca-9745-4282156bb318 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received unexpected event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 for instance with vm_state active and task_state None.#033[00m
Dec  1 19:36:41 compute-0 nova_compute[189564]: 2025-12-01 19:36:41.722 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:42 compute-0 nova_compute[189564]: 2025-12-01 19:36:42.927 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:46 compute-0 podman[242829]: 2025-12-01 19:36:46.450058746 +0000 UTC m=+0.199134216 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Dec  1 19:36:46 compute-0 nova_compute[189564]: 2025-12-01 19:36:46.729 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:47 compute-0 nova_compute[189564]: 2025-12-01 19:36:47.932 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:49 compute-0 nova_compute[189564]: 2025-12-01 19:36:49.271 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:36:49 compute-0 nova_compute[189564]: 2025-12-01 19:36:49.272 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:36:49 compute-0 nova_compute[189564]: 2025-12-01 19:36:49.528 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:36:49 compute-0 nova_compute[189564]: 2025-12-01 19:36:49.529 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:36:49 compute-0 nova_compute[189564]: 2025-12-01 19:36:49.530 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:36:51 compute-0 podman[242851]: 2025-12-01 19:36:51.362780121 +0000 UTC m=+0.123447807 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 19:36:51 compute-0 nova_compute[189564]: 2025-12-01 19:36:51.732 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:51 compute-0 nova_compute[189564]: 2025-12-01 19:36:51.944 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updating instance_info_cache with network_info: [{"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:36:52 compute-0 nova_compute[189564]: 2025-12-01 19:36:52.171 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:36:52 compute-0 nova_compute[189564]: 2025-12-01 19:36:52.171 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 19:36:52 compute-0 nova_compute[189564]: 2025-12-01 19:36:52.935 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:53 compute-0 nova_compute[189564]: 2025-12-01 19:36:53.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:36:53 compute-0 nova_compute[189564]: 2025-12-01 19:36:53.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:36:56 compute-0 nova_compute[189564]: 2025-12-01 19:36:56.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:36:56 compute-0 nova_compute[189564]: 2025-12-01 19:36:56.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:36:56 compute-0 nova_compute[189564]: 2025-12-01 19:36:56.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:36:56 compute-0 nova_compute[189564]: 2025-12-01 19:36:56.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:36:56 compute-0 nova_compute[189564]: 2025-12-01 19:36:56.736 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:57 compute-0 nova_compute[189564]: 2025-12-01 19:36:57.938 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:36:58 compute-0 podman[242876]: 2025-12-01 19:36:58.373296432 +0000 UTC m=+0.139899640 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec  1 19:36:59 compute-0 nova_compute[189564]: 2025-12-01 19:36:59.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:36:59 compute-0 podman[203750]: time="2025-12-01T19:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:36:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:36:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
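The two access-log lines are the podman system service answering libpod REST calls from a Go client (the metrics and health tooling). The same endpoint can be queried with the standard library alone; a sketch assuming the default rootful API socket at /run/podman/podman.sock:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # expect 200 and a JSON array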
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.245 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.275 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.304 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.305 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.306 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.307 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.417 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.493 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.496 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.594 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.597 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.674 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.676 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.749 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.762 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.848 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.850 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.909 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:37:00 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.910 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:00.999 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.002 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.064 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.079 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.143 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.146 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.207 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.210 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.278 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.287 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.346 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
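The DEBUG block above is Nova's periodic disk audit shelling out to qemu-img info once per instance disk, wrapped in oslo.concurrency's prlimit helper so each child process is capped at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30). A minimal sketch of the same call pattern, assuming oslo.concurrency is installed; the disk path is taken from the log:

    import json
    from oslo_concurrency import processutils

    # Same limits the log shows: 1 GiB address space, 30 s CPU time,
    # enforced by re-executing under oslo_concurrency.prlimit.
    LIMITS = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    def qemu_img_info(path):
        # --force-share lets qemu-img read an image a running guest holds
        # open; --output=json keeps parsing trivial.
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json',
            prlimit=LIMITS)
        return json.loads(out)

    info = qemu_img_info(
        '/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0')
    print(info.get('format'), info.get('virtual-size'))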
Dec  1 19:37:01 compute-0 openstack_network_exporter[205914]: ERROR   19:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:37:01 compute-0 openstack_network_exporter[205914]: ERROR   19:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:37:01 compute-0 openstack_network_exporter[205914]: ERROR   19:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:37:01 compute-0 openstack_network_exporter[205914]: ERROR   19:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:37:01 compute-0 openstack_network_exporter[205914]: ERROR   19:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
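The exporter errors above mean openstack_network_exporter could not find the appctl control sockets (*.ctl files) it needs to talk to ovsdb-server and ovn-northd, and the dpif-netdev calls fail because this node has no userspace (netdev) datapath; on a compute node that runs neither ovn-northd nor OVS-DPDK, these are expected noise unless the socket directory mounted into the exporter container is simply wrong. A small sketch of the same pre-flight check, assuming the default rundirs (packaging may relocate them):

    import glob

    # Daemons signal liveness via <name>.<pid>.ctl control sockets.
    CHECKS = {
        'ovsdb-server': '/var/run/openvswitch/ovsdb-server.*.ctl',
        'ovn-northd': '/var/run/ovn/ovn-northd.*.ctl',
    }

    for daemon, pattern in CHECKS.items():
        sockets = glob.glob(pattern)
        print(daemon, '->', sockets if sockets else 'no control socket found')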
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.738 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.879 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.881 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4935MB free_disk=72.3604850769043GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.881 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:37:01 compute-0 nova_compute[189564]: 2025-12-01 19:37:01.882 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
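The two lockutils lines show the resource tracker serializing _update_available_resource behind the "compute_resources" semaphore; the acquire/waited/held/released bookkeeping is emitted by oslo.concurrency itself. A minimal sketch of the same pattern (the lock name is from the log; the guarded body is illustrative):

    from oslo_concurrency import lockutils

    # Context-manager form of the lock the tracker logs above.
    with lockutils.lock('compute_resources'):
        pass  # recompute the host resource view with no concurrent mutation

    # Equivalent decorator form:
    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        pass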
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.001 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.002 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance f4a023f0-04a7-470f-88ef-6284e0580f9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.002 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.003 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.003 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.119 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.153 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
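Placement derives the schedulable capacity of each resource class from this inventory as int((total - reserved) * allocation_ratio). Plugging in the values logged above shows what the scheduler can actually place on this host; a quick check:

    # Inventory values copied from the report client log line above.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)
    # -> VCPU 32, MEMORY_MB 7168, DISK_GB 70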
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.195 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.196 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:37:02 compute-0 nova_compute[189564]: 2025-12-01 19:37:02.940 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:03 compute-0 podman[242932]: 2025-12-01 19:37:03.360330054 +0000 UTC m=+0.125782880 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:37:04 compute-0 nova_compute[189564]: 2025-12-01 19:37:04.176 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:37:06 compute-0 podman[242957]: 2025-12-01 19:37:06.372128859 +0000 UTC m=+0.121790075 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:37:06 compute-0 podman[242956]: 2025-12-01 19:37:06.391188884 +0000 UTC m=+0.150757120 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container)
Dec  1 19:37:06 compute-0 nova_compute[189564]: 2025-12-01 19:37:06.742 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:07 compute-0 nova_compute[189564]: 2025-12-01 19:37:07.942 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:08 compute-0 ovn_controller[97948]: 2025-12-01T19:37:08Z|00044|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  1 19:37:08 compute-0 podman[242993]: 2025-12-01 19:37:08.397123046 +0000 UTC m=+0.138587678 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  1 19:37:08 compute-0 podman[242992]: 2025-12-01 19:37:08.406819089 +0000 UTC m=+0.155821857 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:37:08 compute-0 podman[242991]: 2025-12-01 19:37:08.420754813 +0000 UTC m=+0.175324675 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:37:10 compute-0 ovn_controller[97948]: 2025-12-01T19:37:10Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ce:df:71 192.168.0.62
Dec  1 19:37:10 compute-0 ovn_controller[97948]: 2025-12-01T19:37:10Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ce:df:71 192.168.0.62
Dec  1 19:37:11 compute-0 nova_compute[189564]: 2025-12-01 19:37:11.745 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:37:12.186 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:37:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:37:12.187 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:37:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:37:12.187 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:37:12 compute-0 nova_compute[189564]: 2025-12-01 19:37:12.947 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:16 compute-0 nova_compute[189564]: 2025-12-01 19:37:16.749 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:17 compute-0 podman[243064]: 2025-12-01 19:37:17.351468195 +0000 UTC m=+0.111289439 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350)
Dec  1 19:37:17 compute-0 nova_compute[189564]: 2025-12-01 19:37:17.951 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:21 compute-0 nova_compute[189564]: 2025-12-01 19:37:21.752 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:22 compute-0 podman[243086]: 2025-12-01 19:37:22.353480013 +0000 UTC m=+0.101686910 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 19:37:22 compute-0 nova_compute[189564]: 2025-12-01 19:37:22.953 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:26 compute-0 nova_compute[189564]: 2025-12-01 19:37:26.756 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:27 compute-0 nova_compute[189564]: 2025-12-01 19:37:27.956 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:29 compute-0 podman[243108]: 2025-12-01 19:37:29.322400158 +0000 UTC m=+0.083943557 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 19:37:29 compute-0 podman[203750]: time="2025-12-01T19:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:37:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:37:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
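The two GET lines are the prometheus-podman-exporter polling podman's libpod REST API over the unix socket mounted into its container (/run/podman/podman.sock, per the podman_exporter config_data above). The same endpoints can be queried directly; a sketch using the third-party requests-unixsocket package (an assumption on tooling; curl --unix-socket works just as well):

    import requests_unixsocket  # pip install requests-unixsocket

    session = requests_unixsocket.Session()
    # Socket path is percent-encoded into the URL authority.
    base = 'http+unix://%2Frun%2Fpodman%2Fpodman.sock'

    # The two requests visible in the access log above.
    containers = session.get(
        base + '/v4.9.3/libpod/containers/json', params={'all': 'true'}).json()
    stats = session.get(
        base + '/v4.9.3/libpod/containers/stats', params={'stream': 'false'}).json()
    print(len(containers), 'containers known to podman')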
Dec  1 19:37:31 compute-0 openstack_network_exporter[205914]: ERROR   19:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:37:31 compute-0 openstack_network_exporter[205914]: ERROR   19:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:37:31 compute-0 openstack_network_exporter[205914]: ERROR   19:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:37:31 compute-0 openstack_network_exporter[205914]: ERROR   19:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:37:31 compute-0 openstack_network_exporter[205914]: ERROR   19:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:37:31 compute-0 nova_compute[189564]: 2025-12-01 19:37:31.759 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:32 compute-0 nova_compute[189564]: 2025-12-01 19:37:32.959 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:34 compute-0 podman[243128]: 2025-12-01 19:37:34.345215923 +0000 UTC m=+0.108875413 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:37:36 compute-0 nova_compute[189564]: 2025-12-01 19:37:36.763 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:37 compute-0 podman[243152]: 2025-12-01 19:37:37.378246261 +0000 UTC m=+0.138540898 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, name=ubi9, build-date=2024-09-18T21:23:30, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=)
Dec  1 19:37:37 compute-0 podman[243153]: 2025-12-01 19:37:37.379402357 +0000 UTC m=+0.128744703 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:37:37 compute-0 nova_compute[189564]: 2025-12-01 19:37:37.963 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:39 compute-0 podman[243190]: 2025-12-01 19:37:39.327090555 +0000 UTC m=+0.093354320 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4)
Dec  1 19:37:39 compute-0 podman[243191]: 2025-12-01 19:37:39.36476911 +0000 UTC m=+0.109877865 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  1 19:37:39 compute-0 podman[243196]: 2025-12-01 19:37:39.399841432 +0000 UTC m=+0.151593475 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:37:41 compute-0 nova_compute[189564]: 2025-12-01 19:37:41.768 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:42 compute-0 nova_compute[189564]: 2025-12-01 19:37:42.966 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:46 compute-0 nova_compute[189564]: 2025-12-01 19:37:46.774 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:47 compute-0 nova_compute[189564]: 2025-12-01 19:37:47.969 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:37:48 compute-0 podman[243252]: 2025-12-01 19:37:48.382780582 +0000 UTC m=+0.136795004 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.6, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.812 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.813 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
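Each "Registering pollster" line above is the manager handing one stevedore extension to the shared ThreadPoolExecutor together with the per-cycle cache, pollster history, and discovery cache (all empty dicts at the start of a cycle). A rough sketch of that dispatch shape, with hypothetical names rather than the actual manager internals:

    # Sketch: submit each pollster to a shared executor along with the
    # per-cycle caches that the log lines show as [{}], [{}], and [{}].
    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(pollster, cache, history, discovery_cache):
        return f"polled {pollster}"  # stand-in for the real polling run

    executor = ThreadPoolExecutor(max_workers=1)
    cache, history, discovery_cache = {}, {}, {}
    futures = [
        executor.submit(run_pollster, name, cache, history, discovery_cache)
        for name in ("network.incoming.bytes.delta", "network.outgoing.packets")
    ]
    for future in futures:
        print(future.result())
    executor.shutdown()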
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.822 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.830 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f4a023f0-04a7-470f-88ef-6284e0580f9e', 'name': 'vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.833 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 19:37:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:48.835 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/850ac274-3f22-41ce-b7d7-ac64d7adac70 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.895 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 01 Dec 2025 19:37:48 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e5b6078d-fe6b-472b-8b4e-be5e0ee2c34a x-openstack-request-id: req-e5b6078d-fe6b-472b-8b4e-be5e0ee2c34a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.895 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "850ac274-3f22-41ce-b7d7-ac64d7adac70", "name": "vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx", "status": "ACTIVE", "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "user_id": "7c24e8f82e7842b785e565ac65c7f494", "metadata": {"metering.server_group": "47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9"}, "hostId": "e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6", "image": {"id": "15bc897a-453b-4133-b6db-08ecdc2b6db0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/15bc897a-453b-4133-b6db-08ecdc2b6db0"}]}, "flavor": {"id": "0891a7f6-7194-4f33-bc11-6f6ab8b16145", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/0891a7f6-7194-4f33-bc11-6f6ab8b16145"}]}, "created": "2025-12-01T19:36:30Z", "updated": "2025-12-01T19:36:38Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.62", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ce:df:71"}, {"version": 4, "addr": "192.168.122.240", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ce:df:71"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/850ac274-3f22-41ce-b7d7-ac64d7adac70"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/850ac274-3f22-41ce-b7d7-ac64d7adac70"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T19:36:38.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.896 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/850ac274-3f22-41ce-b7d7-ac64d7adac70 used request id req-e5b6078d-fe6b-472b-8b4e-be5e0ee2c34a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.898 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'name': 'vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
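The REQ/RESP pair above is a plain GET against the Nova servers API, logged by novaclient as a curl command. A minimal sketch of the same call with requests (the token is a placeholder; ceilometer itself authenticates through a keystoneauth1 Session, as the logged file paths show):

    # Sketch: issue the GET that novaclient logged above. The real agent
    # authenticates via keystoneauth1 rather than a hand-built header.
    import requests

    endpoint = "https://nova-internal.openstack.svc:8774/v2.1"
    server_id = "850ac274-3f22-41ce-b7d7-ac64d7adac70"
    resp = requests.get(
        f"{endpoint}/servers/{server_id}",
        headers={
            "Accept": "application/json",
            "X-Auth-Token": "<token>",  # placeholder
            "X-OpenStack-Nova-API-Version": "2.1",
        },
        timeout=30,
    )
    resp.raise_for_status()
    server = resp.json()["server"]
    print(server["name"], server["metadata"])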
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.898 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.899 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.899 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
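The coordination check above comes back with group name [None] and hashrings [None]: this agent is not partitioning resources with other agents and polls everything it discovers locally. A toy hash-partitioning sketch of what coordination would buy (real agents use tooz hash rings, not this modulo scheme):

    # Sketch: with coordination, each agent polls only the resources that
    # hash into its partition; with group [None], one agent polls them all.
    import hashlib

    def owner(resource_id, members):
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return members[digest % len(members)]

    members = ["agent-compute-0", "agent-compute-1"]
    for rid in ("e73931e9-f7fa", "f4a023f0-04a7", "850ac274-3f22"):
        print(rid, "->", owner(rid, members))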
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.899 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:37:49.899702) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
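Note the thread IDs in the two heartbeat lines above: worker 15 emits the heartbeat and worker 12 records it, so status updates are decoupled from the polling thread. An illustrative producer/consumer sketch of that handoff (not the manager's actual implementation):

    # Sketch: the polling thread queues heartbeat names; a separate status
    # thread timestamps them, mirroring the 15 -> 12 handoff logged above.
    import datetime
    import queue
    import threading

    heartbeats = queue.Queue()
    status = {}

    def status_worker():
        while True:
            name = heartbeats.get()
            if name is None:
                break
            status[name] = datetime.datetime.now(datetime.timezone.utc).isoformat()

    thread = threading.Thread(target=status_worker)
    thread.start()
    heartbeats.put("network.incoming.bytes.delta")  # polling-thread side
    heartbeats.put(None)                            # stop the status worker
    thread.join()
    print(status)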
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.907 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.914 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.920 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 850ac274-3f22-41ce-b7d7-ac64d7adac70 / tap076102cd-d4 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.920 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.921 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
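The samples above show how the *.delta meters behave: the first two instances have a prior reading and report the byte difference (84 each), while tap076102cd-d4 on 850ac274… has no predecessor, so its first sample is 0. A minimal sketch of that per-vNIC cache logic (hypothetical structure, not the inspector's code):

    # Sketch: delta = current - previous per (instance, vnic); a first
    # observation has no predecessor and yields 0, as logged above.
    previous = {}

    def delta_sample(instance_id, vnic, rx_bytes):
        key = (instance_id, vnic)
        prior = previous.get(key)
        previous[key] = rx_bytes
        return 0 if prior is None else rx_bytes - prior

    print(delta_sample("850ac274", "tap076102cd-d4", 1486))  # 0: no predecessor
    print(delta_sample("850ac274", "tap076102cd-d4", 1570))  # 84 on the next poll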
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.921 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.921 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.922 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.922 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.922 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.922 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.923 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.923 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:37:49.922393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.924 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.924 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.925 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.925 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.925 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.925 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.925 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.926 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.926 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.927 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.927 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.928 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.928 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.928 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.928 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.928 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:37:49.925534) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:37:49.928683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.929 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.930 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.930 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.930 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.931 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.931 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.931 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.931 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.932 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:37:49.931736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.932 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.933 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.933 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.934 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.934 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.934 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.934 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.934 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.935 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:37:49.934812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.971 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.972 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:49.973 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.009 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.010 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.010 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.041 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.041 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.042 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.043 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
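Each instance above reports three disk.device.capacity samples: two 1073741824-byte values matching the m1.small flavor's 1 GB root and 1 GB ephemeral disks, plus a small third device consistent with the config drive (config_drive is "True" in the server record above). A sketch of reading those capacities with the libvirt-python bindings (the connection URI, domain name, and device names are illustrative, and the call needs libvirt access on the host):

    # Sketch: per-device capacity as the pollster sees it. blockInfo()
    # returns (capacity, allocation, physical) for each guest disk.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    for dev in ("vda", "vdb", "vdc"):  # device names are illustrative
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity)           # e.g. 1073741824 for a 1 GiB disk
    conn.close()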
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.044 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.044 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.044 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.044 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.045 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:37:50.045142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.166 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.167 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.167 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 nova_compute[189564]: 2025-12-01 19:37:50.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:37:50 compute-0 nova_compute[189564]: 2025-12-01 19:37:50.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:37:50 compute-0 nova_compute[189564]: 2025-12-01 19:37:50.251 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
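The three nova-compute lines above are oslo.service's periodic task machinery firing ComputeManager._heal_instance_info_cache. A minimal sketch of that decorator pattern (assuming oslo.service and oslo.config are installed; the spacing value is illustrative):

    # Sketch: periodic tasks are methods decorated on a PeriodicTasks
    # subclass; the service loop drives them via run_periodic_tasks().
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self, conf):
            super().__init__(conf)

        @periodic_task.periodic_task(spacing=60)  # spacing is illustrative
        def _heal_instance_info_cache(self, context):
            print("healing instance info cache")

    manager = Manager(cfg.CONF)
    manager.run_periodic_tasks(context=None)  # normally called by the loop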
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.277 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.278 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.278 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.357 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.358 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.358 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.359 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.360 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.360 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.360 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.360 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.360 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.361 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:37:50.360813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.361 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.362 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.362 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.363 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.363 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.363 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.363 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T19:37:50.363872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.363 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.364 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.364 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]
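The ERROR above is ceilometer's blacklisting path: LibvirtInspector cannot provide data for IncomingBytesRatePollster, so the pollster raises PollsterPermanentError and the manager stops polling those resources on that source. A stand-in sketch of the pattern (the exception class here is a local stub, not the real ceilometer.polling.plugin_base import):

    # Sketch: a pollster that can never succeed for a resource raises a
    # permanent error carrying the resources; the manager blacklists them.
    class PollsterPermanentError(Exception):  # local stub of the real class
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    blacklist = []

    def poll(pollster_name, resources):
        try:
            raise PollsterPermanentError(resources)  # inspector has no data
        except PollsterPermanentError as err:
            blacklist.extend(err.resources)          # "Prevent ... anymore!"

    poll("network.incoming.bytes.rate",
         ["vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx"])
    print(blacklist)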
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.365 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.365 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.365 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.365 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.365 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.366 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.366 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:37:50.365768) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.367 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.367 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 571654353 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.368 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 100146044 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.368 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.latency volume: 76562748 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.369 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 578521054 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.369 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 98903610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.369 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 76991265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.370 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.371 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.371 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.371 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.371 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.371 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:37:50.371847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.372 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.372 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.373 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.373 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.373 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.374 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.374 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.375 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.375 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.376 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
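
Each meter in this cycle runs the same preamble: discovery via [local_instances], a coordination check (the hash ring is [None] because this source is not partitioned across multiple agents), and a heartbeat update. All of it is driven by a single polling source literally named "pollsters" ("in the context of pollsters"). A hedged reconstruction of the polling.yaml entry that would produce a cycle like this; the interval is an assumption and the meter list is abridged to what is visible here:

    sources:
        - name: pollsters
          interval: 300          # assumed; the interval is not visible in the log
          meters:
            - power.state
            - cpu
            - memory.usage
            - disk.device.*
            - disk.ephemeral.size
            - disk.root.size
            - network.incoming.*
            - network.outgoing.*
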
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.376 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.377 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.377 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.377 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.377 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.377 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.378 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.378 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:37:50.377636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.379 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.379 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.380 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.380 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.381 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.381 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.382 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.383 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.383 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.383 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.383 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.383 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.383 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.383 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:37:50.383754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.384 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.384 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.384 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.385 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.385 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.385 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.385 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.385 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.386 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.386 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.386 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.386 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.386 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.386 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:37:50.386756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.421 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.461 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.489 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.490 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
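
All three instances report power.state volume 1, which is libvirt's VIR_DOMAIN_RUNNING. A small sketch of the state lookup behind these samples, again assuming libvirt-python and a local connection:

    import libvirt

    # virDomainState values; "volume: 1" above means RUNNING.
    STATE_NAMES = {
        libvirt.VIR_DOMAIN_NOSTATE: "nostate",
        libvirt.VIR_DOMAIN_RUNNING: "running",
        libvirt.VIR_DOMAIN_BLOCKED: "blocked",
        libvirt.VIR_DOMAIN_PAUSED: "paused",
        libvirt.VIR_DOMAIN_SHUTDOWN: "shutdown",
        libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",
        libvirt.VIR_DOMAIN_CRASHED: "crashed",
        libvirt.VIR_DOMAIN_PMSUSPENDED: "pmsuspended",
    }

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        print(dom.UUIDString(), state, STATE_NAMES.get(state, "unknown"))
    conn.close()
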
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.490 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.490 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.490 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.490 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.490 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.490 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.491 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:37:50.490523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.491 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.491 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 1158162729 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.491 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 13740853 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.492 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.492 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 2063543219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.492 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 12721696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.492 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.493 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.493 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.493 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.493 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.493 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.493 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.493 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.493 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.494 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.494 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 242 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.494 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.494 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.494 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.495 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.495 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.495 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.495 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.496 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:37:50.493537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.496 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.496 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.496 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.496 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.496 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.496 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.496 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.497 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.497 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.497 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:37:50.496296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.497 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.498 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.498 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
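
The log pairs PerDevicePhysicalPollster with disk.device.usage (see the discovery line before that poll), and the capacity/allocation/usage triple polled in this cycle maps onto the three fields returned by libvirt's virDomainGetBlockInfo: the virtual capacity, the bytes allocated, and the physical size of the image on the host. A sketch with the same assumptions as the earlier snippets (libvirt-python, local connection, an assumed device name):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        if not dom.isActive():
            continue
        # "vda" is an assumed device name for illustration.
        capacity, allocation, physical = dom.blockInfo("vda")
        print(dom.UUIDString(), capacity, allocation, physical)
    conn.close()
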
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.498 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.498 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.498 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.498 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.499 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.499 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.499 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.499 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.499 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.499 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.499 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.500 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:37:50.498988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.500 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.500 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:37:50.499957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.501 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.501 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.501 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.501 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.501 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.501 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.501 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.502 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.502 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.502 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.502 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.502 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.502 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:37:50.501478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.502 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.502 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.503 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.503 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.503 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.504 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:37:50.502531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.504 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.504 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.504 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.504 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.504 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.504 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.505 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.505 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.505 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.505 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.505 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.505 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.505 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:37:50.504338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:37:50.505714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.506 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/network.outgoing.bytes volume: 4962 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.506 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.506 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.506 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.506 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.507 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.507 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.507 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.507 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.507 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T19:37:50.507153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
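
The one ERROR in this cycle is not a transient failure: LibvirtInspector exposes only cumulative counters, so a precomputed-rate meter such as network.outgoing.bytes.rate can never be served from it. The pollster raises PollsterPermanentError and the manager blocklists those resources for this source instead of retrying every interval. A hedged sketch of that contract, using the exception class named in the message above (requires the ceilometer package); poll_rate_meter is a hypothetical stand-in for the pollster's get_samples:

    from ceilometer.polling.plugin_base import PollsterPermanentError

    def poll_rate_meter(resources):
        # No rate data is available from this inspector, and none ever
        # will be: fail permanently instead of once.
        raise PollsterPermanentError(resources)

    try:
        poll_rate_meter(["<NovaLikeServer: vn-rxztcck-...>"])
    except PollsterPermanentError:
        # The manager catches this and excludes the resources from future
        # runs of this pollster on this source -- the "Prevent pollster
        # ... anymore!" line above.
        pass
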
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.507 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.507 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.507 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.508 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.508 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.508 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 39190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.508 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/cpu volume: 312290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.508 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/cpu volume: 32150000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.508 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
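
The cpu meter is cumulative guest CPU time in nanoseconds, so the 39190000000 sample above is about 39.19 s of CPU time since the instance started. Utilisation has to be derived from the delta between two polls; a worked example with an assumed 300 s interval, one vCPU, and a hypothetical next sample:

    POLL_INTERVAL_S = 300            # assumed polling interval
    NUM_VCPUS = 1                    # assumed flavor size

    prev_ns = 39_190_000_000         # this cycle's sample
    curr_ns = 39_490_000_000         # hypothetical next sample

    util_pct = (curr_ns - prev_ns) / (POLL_INTERVAL_S * 1e9 * NUM_VCPUS) * 100
    print(f"{util_pct:.2f}% CPU")    # -> 0.10% CPU
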
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.509 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:37:50.508102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.509 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.509 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.509 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.509 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.509 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.510 15 DEBUG ceilometer.compute.pollsters [-] f4a023f0-04a7-470f-88ef-6284e0580f9e/memory.usage volume: 49.171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.510 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/memory.usage volume: 49.05859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.510 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.510 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:37:50.509722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
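
memory.usage is reported in MB and derived from libvirt's per-domain memoryStats(), whose values are in KiB: the 48.79296875 sample above is exactly 49964 KiB / 1024. A hedged sketch of one common derivation (the exact formula and fallback chain vary by inspector release):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains():
        if not dom.isActive():
            continue
        stats = dom.memoryStats()    # KiB: 'available', 'unused', 'rss', ...
        if "available" in stats and "unused" in stats:
            used_mb = (stats["available"] - stats["unused"]) / 1024
        else:
            used_mb = stats.get("rss", 0) / 1024
        print(dom.UUIDString(), f"{used_mb} MB")
    conn.close()
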
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.511 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.512 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:37:50.513 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:37:50 compute-0 nova_compute[189564]: 2025-12-01 19:37:50.604 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:37:50 compute-0 nova_compute[189564]: 2025-12-01 19:37:50.604 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:37:50 compute-0 nova_compute[189564]: 2025-12-01 19:37:50.604 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:37:50 compute-0 nova_compute[189564]: 2025-12-01 19:37:50.605 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:37:51 compute-0 nova_compute[189564]: 2025-12-01 19:37:51.781 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:37:52 compute-0 nova_compute[189564]: 2025-12-01 19:37:52.641 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:37:52 compute-0 nova_compute[189564]: 2025-12-01 19:37:52.662 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:37:52 compute-0 nova_compute[189564]: 2025-12-01 19:37:52.662 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
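Note: the instance_info_cache payload logged above is plain JSON, so extracting the fixed and floating addresses is straightforward. A self-contained sketch over a trimmed copy of that payload:

    import json

    # Trimmed copy of the network_info logged above.
    nw_info = json.loads('''[{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a",
      "address": "fa:16:3e:fc:8b:70",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.47",
          "floating_ips": [{"address": "192.168.122.206"}]}]}]}}]''')

    for vif in nw_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], "fixed", ip["address"], "floating", floating)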
Dec  1 19:37:52 compute-0 nova_compute[189564]: 2025-12-01 19:37:52.972 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:37:53 compute-0 nova_compute[189564]: 2025-12-01 19:37:53.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:37:53 compute-0 podman[243277]: 2025-12-01 19:37:53.335977449 +0000 UTC m=+0.095128955 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 19:37:54 compute-0 nova_compute[189564]: 2025-12-01 19:37:54.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:37:56 compute-0 nova_compute[189564]: 2025-12-01 19:37:56.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:37:56 compute-0 nova_compute[189564]: 2025-12-01 19:37:56.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:37:56 compute-0 nova_compute[189564]: 2025-12-01 19:37:56.784 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:37:57 compute-0 nova_compute[189564]: 2025-12-01 19:37:57.974 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:37:58 compute-0 nova_compute[189564]: 2025-12-01 19:37:58.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:37:58 compute-0 nova_compute[189564]: 2025-12-01 19:37:58.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:37:59 compute-0 nova_compute[189564]: 2025-12-01 19:37:59.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:37:59 compute-0 podman[243303]: 2025-12-01 19:37:59.536439509 +0000 UTC m=+0.133329535 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  1 19:37:59 compute-0 podman[203750]: time="2025-12-01T19:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:37:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:37:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
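Note: the two GET /v4.9.3/libpod/... lines are the podman service answering its REST API on the local unix socket (the podman_exporter container below is configured with CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of issuing the same containers/json query; the small connection subclass is an assumption for illustration:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that speaks HTTP over a unix socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")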
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.276 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.276 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.276 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
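Note: the Acquiring/acquired/released triplet above is oslo.concurrency's standard instrumentation around a named lock, including the waited and held durations. A minimal sketch of the pattern that produces it, assuming oslo.concurrency is installed; the function body here is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs under the named lock; oslo logs the acquire/release
        # lines (with waited/held durations) seen above.
        pass

    clean_compute_node_cache()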
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.277 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.372 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:01 compute-0 openstack_network_exporter[205914]: ERROR   19:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:38:01 compute-0 openstack_network_exporter[205914]: ERROR   19:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:38:01 compute-0 openstack_network_exporter[205914]: ERROR   19:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:38:01 compute-0 openstack_network_exporter[205914]: ERROR   19:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:38:01 compute-0 openstack_network_exporter[205914]: ERROR   19:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.486 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.487 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.559 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.561 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.655 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.656 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.722 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.734 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.786 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.835 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.837 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.935 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:01 compute-0 nova_compute[189564]: 2025-12-01 19:38:01.936 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.029 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.030 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.124 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.138 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.227 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.229 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.294 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.296 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.361 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.362 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.457 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
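Note: each disk image in the audit above is inspected by shelling out to qemu-img under oslo.concurrency's prlimit wrapper, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30). A sketch reproducing one logged invocation and parsing its JSON output; it assumes a host with the same paths and tools available:

    import json
    import subprocess

    path = "/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    info = json.loads(out)
    print(info.get("format"), info.get("virtual-size"))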
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.914 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.917 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4833MB free_disk=72.33890914916992GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.917 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.918 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:38:02 compute-0 nova_compute[189564]: 2025-12-01 19:38:02.977 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:03 compute-0 nova_compute[189564]: 2025-12-01 19:38:03.008 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:38:03 compute-0 nova_compute[189564]: 2025-12-01 19:38:03.008 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance f4a023f0-04a7-470f-88ef-6284e0580f9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:38:03 compute-0 nova_compute[189564]: 2025-12-01 19:38:03.008 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:38:03 compute-0 nova_compute[189564]: 2025-12-01 19:38:03.009 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:38:03 compute-0 nova_compute[189564]: 2025-12-01 19:38:03.009 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:38:03 compute-0 nova_compute[189564]: 2025-12-01 19:38:03.123 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:38:03 compute-0 nova_compute[189564]: 2025-12-01 19:38:03.149 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:38:03 compute-0 nova_compute[189564]: 2025-12-01 19:38:03.150 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:38:03 compute-0 nova_compute[189564]: 2025-12-01 19:38:03.151 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
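Note: placement derives allocatable capacity from the inventory logged above as roughly (total - reserved) * allocation_ratio per resource class, which for this node works out to 32 VCPU, 7168 MB of RAM and 70.2 GB of disk. A worked check using the logged numbers:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2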
Dec  1 19:38:04 compute-0 nova_compute[189564]: 2025-12-01 19:38:04.150 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:38:05 compute-0 podman[243363]: 2025-12-01 19:38:05.337275427 +0000 UTC m=+0.098140469 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:38:06 compute-0 nova_compute[189564]: 2025-12-01 19:38:06.793 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:07 compute-0 nova_compute[189564]: 2025-12-01 19:38:07.979 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:08 compute-0 podman[243388]: 2025-12-01 19:38:08.333395204 +0000 UTC m=+0.089750588 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 19:38:08 compute-0 podman[243387]: 2025-12-01 19:38:08.38527248 +0000 UTC m=+0.140161627 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, io.openshift.expose-services=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, config_id=edpm, release-0.7.12=, container_name=kepler, maintainer=Red Hat, Inc.)
Dec  1 19:38:10 compute-0 podman[243427]: 2025-12-01 19:38:10.377160546 +0000 UTC m=+0.124120789 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 19:38:10 compute-0 podman[243426]: 2025-12-01 19:38:10.377371472 +0000 UTC m=+0.133866122 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:38:10 compute-0 podman[243428]: 2025-12-01 19:38:10.419376881 +0000 UTC m=+0.156729314 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  1 19:38:11 compute-0 nova_compute[189564]: 2025-12-01 19:38:11.797 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:12.187 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:38:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:12.188 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:38:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:12.188 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:38:12 compute-0 nova_compute[189564]: 2025-12-01 19:38:12.980 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:16 compute-0 nova_compute[189564]: 2025-12-01 19:38:16.802 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:17 compute-0 nova_compute[189564]: 2025-12-01 19:38:17.983 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:19 compute-0 podman[243485]: 2025-12-01 19:38:19.356630648 +0000 UTC m=+0.114638874 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7)
Dec  1 19:38:21 compute-0 nova_compute[189564]: 2025-12-01 19:38:21.805 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:22 compute-0 nova_compute[189564]: 2025-12-01 19:38:22.987 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:24 compute-0 podman[243508]: 2025-12-01 19:38:24.337914389 +0000 UTC m=+0.101673548 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.341 189568 DEBUG oslo_concurrency.lockutils [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "f4a023f0-04a7-470f-88ef-6284e0580f9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.342 189568 DEBUG oslo_concurrency.lockutils [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.343 189568 DEBUG oslo_concurrency.lockutils [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.343 189568 DEBUG oslo_concurrency.lockutils [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.344 189568 DEBUG oslo_concurrency.lockutils [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
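The Acquiring/acquired/released triplets above are oslo.concurrency's lock logging: do_terminate_instance serializes on the instance UUID, and _clear_events on "<uuid>-events". A minimal sketch of the same pattern (nova wraps this in its own helpers; this only shows the oslo_concurrency API):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "f4a023f0-04a7-470f-88ef-6284e0580f9e"

    @lockutils.synchronized(INSTANCE_UUID)
    def do_terminate_instance():
        # only one worker at a time may tear down this instance
        with lockutils.lock(INSTANCE_UUID + "-events"):
            pass  # drop pending external events, as _clear_events does

    do_terminate_instance()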
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.346 189568 INFO nova.compute.manager [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Terminating instance#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.349 189568 DEBUG nova.compute.manager [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 19:38:25 compute-0 kernel: tap0aee22ef-1f (unregistering): left promiscuous mode
Dec  1 19:38:25 compute-0 NetworkManager[56474]: <info>  [1764617905.3960] device (tap0aee22ef-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 19:38:25 compute-0 ovn_controller[97948]: 2025-12-01T19:38:25Z|00045|binding|INFO|Releasing lport 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 from this chassis (sb_readonly=0)
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.409 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 ovn_controller[97948]: 2025-12-01T19:38:25Z|00046|binding|INFO|Setting lport 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 down in Southbound
Dec  1 19:38:25 compute-0 ovn_controller[97948]: 2025-12-01T19:38:25Z|00047|binding|INFO|Removing iface tap0aee22ef-1f ovn-installed in OVS
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.413 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.423 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:1c:a4 192.168.0.66'], port_security=['fa:16:3e:0a:1c:a4 192.168.0.66'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vz2nmrxztcck-f2wxpqwzjpbt-22updzqiujy5-port-6brymhhcpz7y', 'neutron:cidrs': '192.168.0.66/24', 'neutron:device_id': 'f4a023f0-04a7-470f-88ef-6284e0580f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a4b8529-6171-4880-a97c-66966115a61b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vz2nmrxztcck-f2wxpqwzjpbt-22updzqiujy5-port-6brymhhcpz7y', 'neutron:project_id': '35d2a9caf1634dca9fc12ec078239d84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e61a5e79-a7e0-4e4e-bcbc-f9aad845c2b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58f8227a-30b3-42df-b03a-90442a651a6d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
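The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's event dispatcher pairing a Port_Binding row update (up [True] -> [False], chassis cleared) with a registered row event. Neutron's real class carries more logic; the skeleton of such an event looks roughly like:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # match 'update' events on the Port_Binding table, any row
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # row/old mirror the new and previous column values printed above
            print('port %s up=%s (was %s)' % (row.logical_port, row.up, old.up))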
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.427 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 in datapath 2a4b8529-6171-4880-a97c-66966115a61b unbound from our chassis#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.430 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a4b8529-6171-4880-a97c-66966115a61b#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.436 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.453 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1c8ae6-8282-4a20-8225-a9ba8ede89ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:38:25 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 6min 46.479s CPU time.
Dec  1 19:38:25 compute-0 systemd-machined[155891]: Machine qemu-2-instance-00000002 terminated.
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.491 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2d22c3-14c8-4369-8b6c-313454947d58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.496 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[5eba643a-6320-4e34-9cb2-2cb175b4ee05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.539 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[7598c5c5-2171-41cc-88a2-fe14baac1543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
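The "privsep: reply[...]" lines are the unprivileged agent receiving (message-type, payload) tuples back from its privsep daemon, which runs decorated functions with elevated capabilities; the leading 4 appears to be oslo.privsep's normal-return opcode. The shape of such an entrypoint, as a sketch with illustrative names:

    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        'example',                       # illustrative prefix, not neutron's
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def link_lookup(ifname):
        # body executes in the privileged daemon process; the caller only
        # sees the return value, serialized as the replies in the log
        return {'ifname': ifname}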
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.566 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[3f322b15-7939-4779-8abf-897d333c5ba5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a4b8529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 9, 'rx_bytes': 574, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388613, 'reachable_time': 30621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243546, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.584 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[93f85766-a049-4938-a762-bd4e078b02f1]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388627, 'tstamp': 388627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243548, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388631, 'tstamp': 388631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243548, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
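The two large replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) fetched via privsep inside the ovnmeta-2a4b8529-... metadata namespace: the veth tap2a4b8529-61 carries 192.168.0.2/24 plus the metadata address 169.254.169.254/32. Roughly equivalent to this pyroute2 call, assuming the namespace still exists on the host:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b') as ns:
        for msg in ns.get_addr(label='tap2a4b8529-61'):
            print(dict(msg['attrs'])['IFA_ADDRESS'])
    # per the log: 192.168.0.2 and 169.254.169.254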
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.586 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.586 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a4b8529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.588 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.592 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.598 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.599 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a4b8529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.599 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.600 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a4b8529-60, col_values=(('external_ids', {'iface-id': 'f95692ff-1cac-46fe-9e62-21af9fa55eb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.600 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
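These one-command transactions re-assert the metadata port's desired state; since tap2a4b8529-60 is already on br-int with the right external_ids, each commits as "Transaction caused no change". The same idempotent commands issued through ovsdbapp directly, as a sketch (connection string is an assumption, and the three commands are batched here for brevity):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap2a4b8529-60', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap2a4b8529-60', may_exist=True))
        txn.add(api.db_set('Interface', 'tap2a4b8529-60',
                           ('external_ids',
                            {'iface-id': 'f95692ff-1cac-46fe-9e62-21af9fa55eb1'})))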
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.654 189568 INFO nova.virt.libvirt.driver [-] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Instance destroyed successfully.#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.655 189568 DEBUG nova.objects.instance [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'resources' on Instance uuid f4a023f0-04a7-470f-88ef-6284e0580f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.672 189568 DEBUG nova.virt.libvirt.vif [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T19:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-rxztcck-f2wxpqwzjpbt-22updzqiujy5-vnf-jgrcp6zbpavd',id=2,image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T19:31:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35d2a9caf1634dca9fc12ec078239d84',ramdisk_id='',reservation_id='r-9jn6ac13',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T19:31:03Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT02NTc4MjE4NjU1NTUwNjgwNzIwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTY1NzgyMTg2NTU1NTA2ODA3MjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NjU3ODIxODY1NTU1MDY4MDcyMD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTY1NzgyMTg2NTU1NTA2ODA3MjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT02NTc4MjE4NjU1NTUwNjgwNzIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT02NTc4MjE4NjU1NTUwNjgwNzIwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  1 19:38:25 compute-0 nova_compute[189564]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NjU3ODIxODY1NTU1MDY4MDcyMD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTY1NzgyMTg2NTU1NTA2ODA3MjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT02NTc4MjE4NjU1NTUwNjgwNzIwPT0tLQo=',user_id='7c24e8f82e7842b785e565ac65c7f494',uuid=f4a023f0-04a7-470f-88ef-6284e0580f9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.672 189568 DEBUG nova.network.os_vif_util [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converting VIF {"id": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "address": "fa:16:3e:0a:1c:a4", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0aee22ef-1f", "ovs_interfaceid": "0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.673 189568 DEBUG nova.network.os_vif_util [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0a:1c:a4,bridge_name='br-int',has_traffic_filtering=True,id=0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0aee22ef-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.674 189568 DEBUG os_vif [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:1c:a4,bridge_name='br-int',has_traffic_filtering=True,id=0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0aee22ef-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
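os_vif converts nova's VIF dict into a VIFOpenVSwitch object and dispatches unplug to the ovs plugin. A stripped-down sketch of the same calls, with field values taken from the log (nova builds these objects via nova.network.os_vif_util rather than by hand):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    ovs_vif = vif.VIFOpenVSwitch(
        id='0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3',
        address='fa:16:3e:0a:1c:a4',
        vif_name='tap0aee22ef-1f',
        bridge_name='br-int',
        network=network.Network(id='2a4b8529-6171-4880-a97c-66966115a61b'))
    instance = instance_info.InstanceInfo(
        uuid='f4a023f0-04a7-470f-88ef-6284e0580f9e',
        name='instance-00000002')
    os_vif.unplug(ovs_vif, instance)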
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.679 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.679 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0aee22ef-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.681 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.684 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.687 189568 INFO os_vif [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:1c:a4,bridge_name='br-int',has_traffic_filtering=True,id=0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0aee22ef-1f')#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.688 189568 INFO nova.virt.libvirt.driver [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Deleting instance files /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e_del#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.689 189568 INFO nova.virt.libvirt.driver [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Deletion of /var/lib/nova/instances/f4a023f0-04a7-470f-88ef-6284e0580f9e_del complete#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.759 189568 DEBUG nova.virt.libvirt.host [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.760 189568 INFO nova.virt.libvirt.host [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] UEFI support detected#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.764 189568 INFO nova.compute.manager [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.764 189568 DEBUG oslo.service.loopingcall [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.765 189568 DEBUG nova.compute.manager [-] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.765 189568 DEBUG nova.network.neutron [-] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.773 189568 DEBUG nova.compute.manager [req-2c37c726-34cd-402d-a744-f8bf87ca2f3c req-b13e0bdb-e62e-46e6-9629-1e16e2ee7afc 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Received event network-vif-unplugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.774 189568 DEBUG oslo_concurrency.lockutils [req-2c37c726-34cd-402d-a744-f8bf87ca2f3c req-b13e0bdb-e62e-46e6-9629-1e16e2ee7afc 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.774 189568 DEBUG oslo_concurrency.lockutils [req-2c37c726-34cd-402d-a744-f8bf87ca2f3c req-b13e0bdb-e62e-46e6-9629-1e16e2ee7afc 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.775 189568 DEBUG oslo_concurrency.lockutils [req-2c37c726-34cd-402d-a744-f8bf87ca2f3c req-b13e0bdb-e62e-46e6-9629-1e16e2ee7afc 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.775 189568 DEBUG nova.compute.manager [req-2c37c726-34cd-402d-a744-f8bf87ca2f3c req-b13e0bdb-e62e-46e6-9629-1e16e2ee7afc 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] No waiting events found dispatching network-vif-unplugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.775 189568 DEBUG nova.compute.manager [req-2c37c726-34cd-402d-a744-f8bf87ca2f3c req-b13e0bdb-e62e-46e6-9629-1e16e2ee7afc 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Received event network-vif-unplugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 19:38:25 compute-0 nova_compute[189564]: 2025-12-01 19:38:25.887 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.888 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:38:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:25.890 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 19:38:25 compute-0 rsyslogd[236874]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 19:38:25.672 189568 DEBUG nova.virt.libvirt.vif [None req-0d56ffdb-90 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
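This rsyslogd complaint explains the truncated nova VIF dump above: the record exceeded the 8096-byte default cap and was split/cut. Raising the cap is a one-line rsyslog.conf change, as a sketch (the directive must appear near the top, before any input modules load):

    # /etc/rsyslog.conf
    $MaxMessageSize 16k   # default is ~8k (the 8096 seen in this log)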
Dec  1 19:38:26 compute-0 nova_compute[189564]: 2025-12-01 19:38:26.770 189568 DEBUG nova.network.neutron [-] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:38:26 compute-0 nova_compute[189564]: 2025-12-01 19:38:26.822 189568 INFO nova.compute.manager [-] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Took 1.06 seconds to deallocate network for instance.#033[00m
Dec  1 19:38:26 compute-0 nova_compute[189564]: 2025-12-01 19:38:26.874 189568 DEBUG oslo_concurrency.lockutils [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:38:26 compute-0 nova_compute[189564]: 2025-12-01 19:38:26.875 189568 DEBUG oslo_concurrency.lockutils [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:38:26 compute-0 nova_compute[189564]: 2025-12-01 19:38:26.978 189568 DEBUG nova.compute.provider_tree [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.025 189568 DEBUG nova.scheduler.client.report [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
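Placement judged this inventory unchanged; for reference, the usable capacity it implies is roughly (total - reserved) * allocation_ratio per resource class:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2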
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.058 189568 DEBUG oslo_concurrency.lockutils [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.098 189568 INFO nova.scheduler.client.report [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Deleted allocations for instance f4a023f0-04a7-470f-88ef-6284e0580f9e#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.206 189568 DEBUG oslo_concurrency.lockutils [None req-0d56ffdb-90a8-41d6-a9cb-dfca591660bc 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.894 189568 DEBUG nova.compute.manager [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Received event network-vif-plugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.894 189568 DEBUG oslo_concurrency.lockutils [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.896 189568 DEBUG oslo_concurrency.lockutils [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.896 189568 DEBUG oslo_concurrency.lockutils [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "f4a023f0-04a7-470f-88ef-6284e0580f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.896 189568 DEBUG nova.compute.manager [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] No waiting events found dispatching network-vif-plugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.897 189568 WARNING nova.compute.manager [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Received unexpected event network-vif-plugged-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.897 189568 DEBUG nova.compute.manager [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Received event network-changed-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.897 189568 DEBUG nova.compute.manager [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Refreshing instance network info cache due to event network-changed-0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.897 189568 DEBUG oslo_concurrency.lockutils [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.898 189568 DEBUG oslo_concurrency.lockutils [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.898 189568 DEBUG nova.network.neutron [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Refreshing network info cache for port 0aee22ef-1ffd-4d83-a6ba-7377ff1b62c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 19:38:27 compute-0 nova_compute[189564]: 2025-12-01 19:38:27.990 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:28 compute-0 nova_compute[189564]: 2025-12-01 19:38:28.781 189568 DEBUG nova.network.neutron [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 19:38:28 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:38:28.893 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:38:29 compute-0 podman[203750]: time="2025-12-01T19:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:38:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:38:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Dec  1 19:38:29 compute-0 nova_compute[189564]: 2025-12-01 19:38:29.988 189568 DEBUG nova.network.neutron [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Dec  1 19:38:29 compute-0 nova_compute[189564]: 2025-12-01 19:38:29.989 189568 DEBUG oslo_concurrency.lockutils [req-a0743b59-03f5-4c11-be9c-cae47c2bf4f1 req-1f8d6778-5cf6-4c84-9acb-2522352caa0c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-f4a023f0-04a7-470f-88ef-6284e0580f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:38:30 compute-0 podman[243567]: 2025-12-01 19:38:30.366420159 +0000 UTC m=+0.131710676 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  1 19:38:30 compute-0 nova_compute[189564]: 2025-12-01 19:38:30.682 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:31 compute-0 openstack_network_exporter[205914]: ERROR   19:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:38:31 compute-0 openstack_network_exporter[205914]: ERROR   19:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:38:31 compute-0 openstack_network_exporter[205914]: ERROR   19:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:38:31 compute-0 openstack_network_exporter[205914]: ERROR   19:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:38:31 compute-0 openstack_network_exporter[205914]: ERROR   19:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
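These exporter errors are expected on a compute node: openstack_network_exporter probes for the appctl control sockets of ovsdb-server and ovn-northd (the latter never runs here, and the former's socket is evidently not visible where the exporter looks), and the dpif-netdev queries fail because this host uses the kernel datapath, not a userspace one. The probe amounts to looking for unixctl sockets named <daemon>.<pid>.ctl, roughly:

    import glob

    # unixctl sockets live under the daemons' run directories
    print(glob.glob('/var/run/openvswitch/*.ctl'))  # ovs-vswitchd / ovsdb-server
    print(glob.glob('/var/run/ovn/*.ctl'))          # empty on a compute node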
Dec  1 19:38:32 compute-0 nova_compute[189564]: 2025-12-01 19:38:32.994 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:35 compute-0 nova_compute[189564]: 2025-12-01 19:38:35.685 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:36 compute-0 podman[243586]: 2025-12-01 19:38:36.347481247 +0000 UTC m=+0.111924219 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:38:37 compute-0 nova_compute[189564]: 2025-12-01 19:38:37.996 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:39 compute-0 podman[243611]: 2025-12-01 19:38:39.339177643 +0000 UTC m=+0.100399944 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, architecture=x86_64, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  1 19:38:39 compute-0 podman[243612]: 2025-12-01 19:38:39.376139314 +0000 UTC m=+0.132309485 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 19:38:39 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 19:38:40 compute-0 nova_compute[189564]: 2025-12-01 19:38:40.651 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764617905.649323, f4a023f0-04a7-470f-88ef-6284e0580f9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:38:40 compute-0 nova_compute[189564]: 2025-12-01 19:38:40.652 189568 INFO nova.compute.manager [-] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] VM Stopped (Lifecycle Event)#033[00m
Dec  1 19:38:40 compute-0 nova_compute[189564]: 2025-12-01 19:38:40.674 189568 DEBUG nova.compute.manager [None req-1a4736d7-4e7f-4b53-b0b8-dd6dd3c259c3 - - - - - -] [instance: f4a023f0-04a7-470f-88ef-6284e0580f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:38:40 compute-0 nova_compute[189564]: 2025-12-01 19:38:40.689 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:41 compute-0 podman[243649]: 2025-12-01 19:38:41.306725064 +0000 UTC m=+0.076276989 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 19:38:41 compute-0 podman[243650]: 2025-12-01 19:38:41.316762432 +0000 UTC m=+0.078582802 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 19:38:41 compute-0 podman[243651]: 2025-12-01 19:38:41.372383726 +0000 UTC m=+0.121829663 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:38:43 compute-0 nova_compute[189564]: 2025-12-01 19:38:43.000 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:45 compute-0 nova_compute[189564]: 2025-12-01 19:38:45.691 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:48 compute-0 nova_compute[189564]: 2025-12-01 19:38:48.004 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:50 compute-0 nova_compute[189564]: 2025-12-01 19:38:50.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:38:50 compute-0 nova_compute[189564]: 2025-12-01 19:38:50.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:38:50 compute-0 podman[243712]: 2025-12-01 19:38:50.345828915 +0000 UTC m=+0.102855061 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7)
Dec  1 19:38:50 compute-0 nova_compute[189564]: 2025-12-01 19:38:50.682 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:38:50 compute-0 nova_compute[189564]: 2025-12-01 19:38:50.683 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:38:50 compute-0 nova_compute[189564]: 2025-12-01 19:38:50.683 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:38:50 compute-0 nova_compute[189564]: 2025-12-01 19:38:50.693 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:53 compute-0 nova_compute[189564]: 2025-12-01 19:38:53.007 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:54 compute-0 nova_compute[189564]: 2025-12-01 19:38:54.284 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:38:54 compute-0 nova_compute[189564]: 2025-12-01 19:38:54.339 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:38:54 compute-0 nova_compute[189564]: 2025-12-01 19:38:54.339 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 19:38:54 compute-0 nova_compute[189564]: 2025-12-01 19:38:54.340 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:38:55 compute-0 podman[243732]: 2025-12-01 19:38:55.355252464 +0000 UTC m=+0.123928329 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 19:38:55 compute-0 nova_compute[189564]: 2025-12-01 19:38:55.696 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:56 compute-0 nova_compute[189564]: 2025-12-01 19:38:56.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:38:57 compute-0 nova_compute[189564]: 2025-12-01 19:38:57.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:38:57 compute-0 nova_compute[189564]: 2025-12-01 19:38:57.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:38:58 compute-0 nova_compute[189564]: 2025-12-01 19:38:58.010 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:38:59 compute-0 nova_compute[189564]: 2025-12-01 19:38:59.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:38:59 compute-0 podman[203750]: time="2025-12-01T19:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:38:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:38:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec  1 19:39:00 compute-0 ovn_controller[97948]: 2025-12-01T19:39:00Z|00048|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Dec  1 19:39:00 compute-0 nova_compute[189564]: 2025-12-01 19:39:00.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:39:00 compute-0 nova_compute[189564]: 2025-12-01 19:39:00.245 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:39:00 compute-0 nova_compute[189564]: 2025-12-01 19:39:00.287 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:39:00 compute-0 nova_compute[189564]: 2025-12-01 19:39:00.699 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:39:01 compute-0 podman[243756]: 2025-12-01 19:39:01.315223782 +0000 UTC m=+0.077894290 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:39:01 compute-0 openstack_network_exporter[205914]: ERROR   19:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:39:01 compute-0 openstack_network_exporter[205914]: ERROR   19:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:39:01 compute-0 openstack_network_exporter[205914]: ERROR   19:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:39:01 compute-0 openstack_network_exporter[205914]: ERROR   19:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:39:01 compute-0 openstack_network_exporter[205914]: ERROR   19:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:39:02 compute-0 nova_compute[189564]: 2025-12-01 19:39:02.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.014 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.282 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.283 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.284 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.385 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.481 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.482 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.551 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.552 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.633 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.634 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.691 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.700 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.760 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.761 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.827 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.832 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.898 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.900 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:39:03 compute-0 nova_compute[189564]: 2025-12-01 19:39:03.997 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.342 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.344 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4972MB free_disk=72.36140441894531GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.344 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.345 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.437 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.437 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.438 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.438 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.498 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.517 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.537 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:39:04 compute-0 nova_compute[189564]: 2025-12-01 19:39:04.538 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:39:05 compute-0 nova_compute[189564]: 2025-12-01 19:39:05.703 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:39:07 compute-0 podman[243803]: 2025-12-01 19:39:07.341795642 +0000 UTC m=+0.088914029 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:39:08 compute-0 nova_compute[189564]: 2025-12-01 19:39:08.018 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:39:10 compute-0 podman[243827]: 2025-12-01 19:39:10.329039936 +0000 UTC m=+0.080609486 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:39:10 compute-0 podman[243826]: 2025-12-01 19:39:10.366303227 +0000 UTC m=+0.123967660 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, managed_by=edpm_ansible)
Dec  1 19:39:10 compute-0 nova_compute[189564]: 2025-12-01 19:39:10.707 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:39:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:39:12.189 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:39:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:39:12.189 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:39:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:39:12.190 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:39:12 compute-0 podman[243863]: 2025-12-01 19:39:12.35089789 +0000 UTC m=+0.100071503 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  1 19:39:12 compute-0 podman[243862]: 2025-12-01 19:39:12.351542851 +0000 UTC m=+0.095102436 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 19:39:12 compute-0 podman[243864]: 2025-12-01 19:39:12.371020418 +0000 UTC m=+0.114894573 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller)
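The three health_status=healthy events above are podman executing each container's configured healthcheck ('test': '/openstack/healthcheck', run from the mounted healthchecks directory) and recording the result. A minimal Python sketch of reading that same recorded health state back out of podman; the helper name is hypothetical, and it shells out to the real podman inspect command:

import json
import subprocess

# Hypothetical helper (not part of the deployment): read the health state
# podman records for a container, i.e. the state these events report.
def container_health(name):
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", name],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    return health["Status"], health["FailingStreak"]

for name in ("ovn_metadata_agent", "ceilometer_agent_compute", "ovn_controller"):
    status, streak = container_health(name)
    print(f"{name}: {status} (failing streak {streak})")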
Dec  1 19:39:13 compute-0 nova_compute[189564]: 2025-12-01 19:39:13.020 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:15 compute-0 nova_compute[189564]: 2025-12-01 19:39:15.710 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:18 compute-0 nova_compute[189564]: 2025-12-01 19:39:18.024 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:20 compute-0 nova_compute[189564]: 2025-12-01 19:39:20.713 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:21 compute-0 podman[243925]: 2025-12-01 19:39:21.393877383 +0000 UTC m=+0.150364726 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_id=edpm)
Dec  1 19:39:23 compute-0 nova_compute[189564]: 2025-12-01 19:39:23.028 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:25 compute-0 nova_compute[189564]: 2025-12-01 19:39:25.716 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:26 compute-0 podman[243946]: 2025-12-01 19:39:26.315037434 +0000 UTC m=+0.085104528 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
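The node_exporter container above publishes metrics on host port 9100 ('ports': ['9100:9100']) and enables the systemd collector for the edpm/ovs/virt units. A minimal sketch of scraping it, assuming the web.config.file does not enforce TLS or client auth; with certificates mounted under /etc/node_exporter/tls it may, in which case an https:// URL and the CA bundle would be needed instead:

import urllib.request

# Scrape the exporter on host port 9100 and filter to the systemd collector's
# unit-state series enabled by --collector.systemd above.
with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)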
Dec  1 19:39:28 compute-0 nova_compute[189564]: 2025-12-01 19:39:28.031 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:29 compute-0 podman[203750]: time="2025-12-01T19:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:39:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:39:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
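The two GET lines above are podman's libpod REST API (the service behind /run/podman/podman.sock, which podman_exporter also uses via CONTAINER_HOST) answering a container-list and a stats query. A sketch of issuing the same containers/json request from Python; UnixHTTPConnection is a local helper, not part of podman, and the field names follow the libpod v4 container-list schema:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # Local helper: plain HTTP over the unix socket the libpod API listens on.
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
for c in containers:
    print(c["Names"], c["State"])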
Dec  1 19:39:30 compute-0 nova_compute[189564]: 2025-12-01 19:39:30.719 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:31 compute-0 openstack_network_exporter[205914]: ERROR   19:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:39:31 compute-0 openstack_network_exporter[205914]: ERROR   19:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:39:31 compute-0 openstack_network_exporter[205914]: ERROR   19:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:39:31 compute-0 openstack_network_exporter[205914]: ERROR   19:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:39:31 compute-0 openstack_network_exporter[205914]: ERROR   19:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
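The appctl.go errors above mean openstack_network_exporter found no control sockets for ovn-northd or ovsdb-server, which is expected on a compute node where neither daemon runs locally, and the dpif-netdev/* commands apply only to a userspace (DPDK) datapath, so ovs-vswitchd answers "please specify an existing datapath" on a kernel-datapath host. A sketch of reproducing the same probes by hand with ovs-appctl:

import subprocess

# The exporter's failing calls map to these ovs-appctl invocations; on a
# kernel-datapath node they fail the same way the log above shows.
for cmd in (["ovs-appctl", "-t", "ovs-vswitchd", "dpif-netdev/pmd-rxq-show"],
            ["ovs-appctl", "-t", "ovs-vswitchd", "dpif-netdev/pmd-perf-show"]):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(" ".join(cmd), "->", (result.stdout or result.stderr).strip())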
Dec  1 19:39:32 compute-0 podman[243971]: 2025-12-01 19:39:32.350046281 +0000 UTC m=+0.118825397 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 19:39:33 compute-0 nova_compute[189564]: 2025-12-01 19:39:33.034 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:35 compute-0 nova_compute[189564]: 2025-12-01 19:39:35.722 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:38 compute-0 nova_compute[189564]: 2025-12-01 19:39:38.037 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:38 compute-0 podman[243988]: 2025-12-01 19:39:38.335361304 +0000 UTC m=+0.098648358 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:39:40 compute-0 nova_compute[189564]: 2025-12-01 19:39:40.725 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:41 compute-0 podman[244012]: 2025-12-01 19:39:41.316179784 +0000 UTC m=+0.089239089 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, version=9.4, managed_by=edpm_ansible, release-0.7.12=)
Dec  1 19:39:41 compute-0 podman[244013]: 2025-12-01 19:39:41.337639984 +0000 UTC m=+0.102004714 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:39:43 compute-0 nova_compute[189564]: 2025-12-01 19:39:43.039 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:43 compute-0 podman[244049]: 2025-12-01 19:39:43.322042852 +0000 UTC m=+0.096826930 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:39:43 compute-0 podman[244050]: 2025-12-01 19:39:43.339836716 +0000 UTC m=+0.092579936 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  1 19:39:43 compute-0 podman[244051]: 2025-12-01 19:39:43.413383727 +0000 UTC m=+0.164953719 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 19:39:45 compute-0 nova_compute[189564]: 2025-12-01 19:39:45.729 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:48 compute-0 nova_compute[189564]: 2025-12-01 19:39:48.042 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.813 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.813 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
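The two manager lines above say the [pollsters] source has more pollsters than worker threads, so polling tasks queue on a single-thread executor and the cycle runs longer. A toy reproduction of that executor behavior; the pollster count and sleep time are illustrative, not taken from the agent:

import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)  # stand-in for one polling task
    return name

pollsters = [f"pollster-{i}" for i in range(4)]
start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as executor:  # "[1] threads", as logged
    list(executor.map(poll, pollsters))
print(f"4 pollsters / 1 worker: {time.monotonic() - start:.2f}s (tasks ran serially)")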
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.820 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.825 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'name': 'vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.825 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.825 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.825 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.825 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:39:48.825600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.830 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.834 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.834 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.834 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.835 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.835 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.835 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.835 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.835 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.835 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.835 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.836 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.836 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.836 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.836 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.836 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.836 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.836 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:39:48.835220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.837 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.837 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.837 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.837 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.837 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.837 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.837 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.838 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.838 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.838 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.838 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.838 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.838 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.838 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.838 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.839 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.839 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.839 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.839 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.839 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.839 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:39:48.836316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:39:48.837770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:39:48.838804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:39:48.839872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.865 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.866 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.866 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.890 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.890 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.891 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.891 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
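
The disk.device.capacity run above shows the cycle that every pollster in this section repeats: discover resources via local_instances, check whether the pollster's source needs hashring coordination, record a heartbeat, then emit one sample per instance and per device. The following is a minimal sketch of that control flow inferred only from the DEBUG/INFO lines; run_pollster, discover, and get_samples are hypothetical stand-ins, not ceilometer's actual API:

    import datetime

    def run_pollster(name, discover, get_samples, hashring=None):
        # 1. Discovery: find the resources (here: local instances) to poll.
        resources = discover()
        # 2. Coordination: with no hashring configured ("[None]" above),
        #    every discovered resource is polled by this agent.
        if hashring is not None:
            resources = [r for r in resources if hashring.belongs_to_self(r)]
        # 3. Heartbeat: mark the pollster alive for this cycle.
        ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
        print(f"Updated heartbeat for {name} ({ts})")
        # 4. Polling: disk.* meters yield one sample per device, which is
        #    why each instance above produces three "volume:" lines.
        samples = [s for r in resources for s in get_samples(r)]
        print(f"Finished polling pollster {name}")
        return samples

    # Toy usage mirroring the two instances and three devices seen above:
    run_pollster("disk.device.capacity",
                 lambda: ["e73931e9", "850ac274"],
                 lambda inst: [(inst, dev) for dev in ("vda", "vdb", "vdc")])
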
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.891 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.891 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.891 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.891 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.892 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.892 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:39:48.892024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.958 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.958 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:48.958 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.034 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.035 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.036 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.037 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.037 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.037 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.037 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.038 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.038 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.038 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:39:49.038237) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.039 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.040 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.040 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.040 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
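
network.incoming.bytes.rate is skipped immediately after network.incoming.bytes succeeds, so the skip is not about the instances being gone; the log itself does not show why the rate variant's discovery came back empty. The sketch below models only the skip branch visible above, with hypothetical names, and should be read as one plausible shape of the logic rather than ceilometer's implementation:

    def run_pollster_if_discovered(pollster, discovery_cache):
        # Discovery output is cached per cycle per discovery method, so
        # this lookup stays cheap even with dozens of pollsters.
        resources = discovery_cache.get(pollster.discovery, [])
        if not resources:
            print(f"Skip pollster {pollster.name}, "
                  f"no new resources found this cycle")
            return []
        return pollster.get_samples(resources)
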
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.040 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.040 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.041 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.041 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.041 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.042 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.042 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.043 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.044 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:39:49.041688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.044 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 578521054 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.045 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 98903610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.045 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 76991265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.047 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.047 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.048 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.048 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.048 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.049 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:39:49.048951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.049 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.050 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.050 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.051 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.052 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.052 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.054 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
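
Pairing the two meters just polled gives a quick sanity check: for the first device of instance e73931e9-f7fa-4666-b781-700b385532a9, disk.device.read.latency is 474440550 and disk.device.read.requests is 840. Assuming (the log does not state units) that the latency meter is a cumulative read-time counter in nanoseconds, the average cost of a read works out as:

    total_read_time_ns = 474_440_550  # disk.device.read.latency, first device
    total_reads = 840                 # disk.device.read.requests, same device
    avg_us = total_read_time_ns / total_reads / 1_000
    print(f"average read latency ~ {avg_us:.0f} us")  # ~ 565 us per read
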
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.055 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.055 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.055 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.056 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:39:49.056536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.056 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.057 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.058 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.059 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.060 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.061 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.062 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.063 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
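
The disk.device size meters polled so far describe the same devices from different angles: capacity 1073741824 is exactly 1 GiB of virtual size, while usage 21233664 is the space actually consumed. Assuming both are plain byte counts, the first device of e73931e9-... is about 2% utilised:

    capacity_b = 1_073_741_824  # disk.device.capacity: 1 GiB virtual size
    usage_b = 21_233_664        # disk.device.usage: bytes actually used
    print(f"{usage_b / capacity_b:.1%} of the virtual disk in use")  # ~ 2.0%
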
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.063 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.064 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.064 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.064 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.065 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.065 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:39:49.065052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.066 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.067 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.068 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.069 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.070 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.071 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.071 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.072 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.072 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.073 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.073 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:39:49.073620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.098 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.130 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.131 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
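
power.state reports volume 1 for both instances. Assuming the meter mirrors the libvirt domain state enumeration (ceilometer's compute pollsters read guests through libvirt, though the exact mapping is an assumption here), 1 means the guest is running:

    # libvirt virDomainState values; 1 = VIR_DOMAIN_RUNNING.
    POWER_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(POWER_STATE[1])  # both instances above: "running"
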
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.131 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.131 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.131 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.131 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.131 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.132 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.132 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.132 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.132 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 2063543219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.133 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 12721696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.133 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.133 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.133 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.134 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.134 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.134 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.134 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.134 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.134 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.134 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.135 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.135 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:39:49.131934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.135 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:39:49.134233) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.136 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
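
Note the second numeric column after the timestamp: the sample lines come from worker 15 while every "Updated heartbeat for ..." line comes from worker 12, and the two streams interleave (the write.latency heartbeat above lands between write.requests samples, and some heartbeat lines carry slightly older timestamps than the line printed just before them). That pattern is consistent with heartbeats being flushed by a separate thread; the following is a purely illustrative sketch of such a design, not ceilometer's code:

    import queue
    import threading

    hb_queue: "queue.Queue[tuple[str, str]]" = queue.Queue()

    def heartbeat_writer():
        # Separate worker (like the "12" lines): its output interleaves
        # with the poller's, so log order need not match timestamp order.
        while True:
            name, ts = hb_queue.get()
            print(f"Updated heartbeat for {name} ({ts})")
            hb_queue.task_done()

    threading.Thread(target=heartbeat_writer, daemon=True).start()
    hb_queue.put(("disk.device.write.requests", "2025-12-01T19:39:49.134233"))
    hb_queue.join()  # wait for the writer to drain the queue
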
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.136 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.136 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.136 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.136 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.136 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.136 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.137 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.137 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.137 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.138 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:39:49.136794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.138 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.138 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.138 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.138 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.139 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.139 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.139 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.139 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.139 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.140 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.140 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.140 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.140 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:39:49.139085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:39:49.140265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.141 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.142 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.142 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.142 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.143 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.143 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.143 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:39:49.143451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.144 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.144 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.145 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.145 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.145 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:39:49.145810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.145 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.146 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.146 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.147 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.147 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.148 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.148 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.148 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.148 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.148 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:39:49.148510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.150 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.151 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.151 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.152 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.152 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.152 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.152 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.152 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.153 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.154 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.154 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:39:49.152460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.155 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.155 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.155 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.155 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.155 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.156 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:39:49.156039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.156 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 40960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.157 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/cpu volume: 33920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.157 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.158 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.158 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.158 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.158 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.158 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.159 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.159 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/memory.usage volume: 49.05859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:39:49.158754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.161 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.162 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.163 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.164 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.165 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.166 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.166 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.166 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.166 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:39:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:39:49.166 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
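
The lines above are one complete compute polling pass: discovery via [local_instances], a coordination check that is skipped because no hashring is configured, a heartbeat update, then one sample per instance. The cpu volumes are cumulative guest CPU time in nanoseconds and memory.usage is in MB, so a rate has to be derived from two successive polls. A minimal sketch of that derivation, not ceilometer's actual code; the previous sample and the 300 s interval are assumed:

    NS_PER_S = 1_000_000_000
    POLL_INTERVAL_S = 300                 # assumed polling interval
    VCPUS = 1                             # both instances here have 1 VCPU

    prev_cpu_ns = 40_660_000_000          # hypothetical previous poll
    curr_cpu_ns = 40_960_000_000          # e73931e9-.../cpu volume logged above

    # Cumulative counter -> utilisation over the interval, in percent.
    util = (curr_cpu_ns - prev_cpu_ns) / (POLL_INTERVAL_S * NS_PER_S * VCPUS) * 100
    print(f"cpu utilisation: {util:.2f}%")   # 0.10% for these numbers
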
Dec  1 19:39:50 compute-0 nova_compute[189564]: 2025-12-01 19:39:50.731 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:52 compute-0 podman[244110]: 2025-12-01 19:39:52.299780309 +0000 UTC m=+0.074141431 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Dec  1 19:39:53 compute-0 nova_compute[189564]: 2025-12-01 19:39:53.049 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:53 compute-0 nova_compute[189564]: 2025-12-01 19:39:53.538 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:39:53 compute-0 nova_compute[189564]: 2025-12-01 19:39:53.539 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:39:53 compute-0 nova_compute[189564]: 2025-12-01 19:39:53.539 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 19:39:54 compute-0 nova_compute[189564]: 2025-12-01 19:39:54.223 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:39:54 compute-0 nova_compute[189564]: 2025-12-01 19:39:54.224 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:39:54 compute-0 nova_compute[189564]: 2025-12-01 19:39:54.224 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:39:54 compute-0 nova_compute[189564]: 2025-12-01 19:39:54.225 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:39:55 compute-0 nova_compute[189564]: 2025-12-01 19:39:55.734 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:56 compute-0 nova_compute[189564]: 2025-12-01 19:39:56.932 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:39:56 compute-0 nova_compute[189564]: 2025-12-01 19:39:56.954 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:39:56 compute-0 nova_compute[189564]: 2025-12-01 19:39:56.955 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
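
The _heal_instance_info_cache pass above refreshes one instance's cached Neutron view per run; the structure logged at 19:39:56.932 is the network_info it stores. A minimal sketch of reading that structure, trimmed to only the fields this example touches:

    import json

    # Abbreviated from the network_info payload logged above.
    network_info = json.loads('''[{"network": {"subnets": [{"ips": [
        {"address": "192.168.0.47", "type": "fixed",
         "floating_ips": [{"address": "192.168.122.206", "type": "floating"}]}
    ]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", floats)   # 192.168.0.47 -> ['192.168.122.206']
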
Dec  1 19:39:56 compute-0 nova_compute[189564]: 2025-12-01 19:39:56.956 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:39:57 compute-0 podman[244131]: 2025-12-01 19:39:57.329590254 +0000 UTC m=+0.090060395 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 19:39:58 compute-0 nova_compute[189564]: 2025-12-01 19:39:58.051 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:39:58 compute-0 nova_compute[189564]: 2025-12-01 19:39:58.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:39:58 compute-0 nova_compute[189564]: 2025-12-01 19:39:58.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:39:58 compute-0 nova_compute[189564]: 2025-12-01 19:39:58.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:39:59 compute-0 nova_compute[189564]: 2025-12-01 19:39:59.251 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:39:59 compute-0 podman[203750]: time="2025-12-01T19:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:39:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:39:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec  1 19:40:00 compute-0 nova_compute[189564]: 2025-12-01 19:40:00.737 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:40:01 compute-0 nova_compute[189564]: 2025-12-01 19:40:01.245 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:40:01 compute-0 nova_compute[189564]: 2025-12-01 19:40:01.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:40:01 compute-0 openstack_network_exporter[205914]: ERROR   19:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:40:01 compute-0 openstack_network_exporter[205914]: ERROR   19:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:40:01 compute-0 openstack_network_exporter[205914]: ERROR   19:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:40:01 compute-0 openstack_network_exporter[205914]: ERROR   19:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:40:01 compute-0 openstack_network_exporter[205914]: ERROR   19:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
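
openstack_network_exporter locates ovsdb-server, ovn-northd and the vswitchd datapath through their appctl control sockets. On a compute node ovn-northd does not run, and there is no userspace (dpif-netdev) datapath, so all four errors above are expected noise rather than a fault. A quick check for which control sockets exist; the host-side directories are assumed from the exporter's volume mounts logged earlier:

    import glob

    # Daemons create <name>.<pid>.ctl sockets in their run directories;
    # an empty match for ovn-northd explains the errors above.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))
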
Dec  1 19:40:02 compute-0 nova_compute[189564]: 2025-12-01 19:40:02.251 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.055 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.302 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.302 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.303 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.303 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:40:03 compute-0 podman[244155]: 2025-12-01 19:40:03.367379008 +0000 UTC m=+0.124993862 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.414 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.520 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.521 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.591 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.593 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.652 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.655 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.748 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.754 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.809 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.810 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.899 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.900 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.980 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:40:03 compute-0 nova_compute[189564]: 2025-12-01 19:40:03.982 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.049 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
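
Every qemu-img probe in this audit runs under oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) so a pathological image cannot hang or balloon the resource audit. A minimal sketch of the same guard using only the stdlib, not oslo's actual implementation:

    import json
    import resource
    import subprocess

    def limited():
        # Mirror --as=1073741824 and --cpu=30 from the logged command line.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    out = subprocess.run(
        ["/usr/bin/qemu-img", "info",
         "/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk",
         "--force-share", "--output=json"],
        preexec_fn=limited, env={"LC_ALL": "C", "LANG": "C"},
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out)["virtual-size"])
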
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.543 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.544 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4961MB free_disk=72.36137390136719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.545 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.545 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.646 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.646 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.647 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.647 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.670 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.696 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.697 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.716 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.747 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.817 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.850 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.852 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:40:04 compute-0 nova_compute[189564]: 2025-12-01 19:40:04.853 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
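
The audit ends with placement confirming the inventory is unchanged. Placement computes schedulable capacity as (total - reserved) * allocation_ratio, which is how 8 physical VCPUs back up to 32 allocatable ones here. A sketch reproducing the capacities from the inventory logged above:

    inventory = {  # from the set_inventory_for_provider line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable capacity:", capacity)   # 32.0 / 7168.0 / 70.2
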
Dec  1 19:40:05 compute-0 nova_compute[189564]: 2025-12-01 19:40:05.740 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:40:08 compute-0 nova_compute[189564]: 2025-12-01 19:40:08.058 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:40:09 compute-0 podman[244200]: 2025-12-01 19:40:09.366977255 +0000 UTC m=+0.124544428 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:40:10 compute-0 nova_compute[189564]: 2025-12-01 19:40:10.744 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:40:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:40:12.190 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:40:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:40:12.191 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:40:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:40:12.192 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
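
The acquire/release triple above is oslo_concurrency's named-lock pattern: ProcessMonitor._check_child_processes runs under a "_check_child_processes" lock, and lockutils logs the wait and hold times. A minimal sketch of the same pattern; the body is a stand-in, not neutron's code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Probe each monitored external process (haproxy etc.) here;
        # lockutils emits the Acquiring/acquired/released debug lines.
        pass

    _check_child_processes()
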
Dec  1 19:40:12 compute-0 podman[244223]: 2025-12-01 19:40:12.308760218 +0000 UTC m=+0.074473211 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, version=9.4, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0)
Dec  1 19:40:12 compute-0 podman[244224]: 2025-12-01 19:40:12.348916821 +0000 UTC m=+0.098221954 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Dec  1 19:40:13 compute-0 nova_compute[189564]: 2025-12-01 19:40:13.060 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:40:14 compute-0 podman[244263]: 2025-12-01 19:40:14.31190029 +0000 UTC m=+0.085903235 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:40:14 compute-0 podman[244262]: 2025-12-01 19:40:14.335353642 +0000 UTC m=+0.101642551 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Dec  1 19:40:14 compute-0 podman[244264]: 2025-12-01 19:40:14.39045261 +0000 UTC m=+0.150238154 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 19:40:15 compute-0 nova_compute[189564]: 2025-12-01 19:40:15.746 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:18 compute-0 nova_compute[189564]: 2025-12-01 19:40:18.064 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:20 compute-0 nova_compute[189564]: 2025-12-01 19:40:20.749 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:23 compute-0 nova_compute[189564]: 2025-12-01 19:40:23.066 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:23 compute-0 podman[244326]: 2025-12-01 19:40:23.302366539 +0000 UTC m=+0.075259256 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Dec  1 19:40:25 compute-0 nova_compute[189564]: 2025-12-01 19:40:25.753 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:28 compute-0 nova_compute[189564]: 2025-12-01 19:40:28.069 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:28 compute-0 podman[244348]: 2025-12-01 19:40:28.308379729 +0000 UTC m=+0.078341824 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 19:40:29 compute-0 podman[203750]: time="2025-12-01T19:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:40:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:40:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec  1 19:40:30 compute-0 nova_compute[189564]: 2025-12-01 19:40:30.756 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:31 compute-0 openstack_network_exporter[205914]: ERROR   19:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:40:31 compute-0 openstack_network_exporter[205914]: ERROR   19:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:40:31 compute-0 openstack_network_exporter[205914]: ERROR   19:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:40:31 compute-0 openstack_network_exporter[205914]: ERROR   19:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:40:31 compute-0 openstack_network_exporter[205914]: ERROR   19:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:40:33 compute-0 nova_compute[189564]: 2025-12-01 19:40:33.072 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:34 compute-0 podman[244376]: 2025-12-01 19:40:34.326510963 +0000 UTC m=+0.074822173 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  1 19:40:35 compute-0 nova_compute[189564]: 2025-12-01 19:40:35.759 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:38 compute-0 nova_compute[189564]: 2025-12-01 19:40:38.076 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:40 compute-0 podman[244396]: 2025-12-01 19:40:40.351508504 +0000 UTC m=+0.107029704 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:40:40 compute-0 nova_compute[189564]: 2025-12-01 19:40:40.761 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:43 compute-0 nova_compute[189564]: 2025-12-01 19:40:43.078 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:43 compute-0 podman[244419]: 2025-12-01 19:40:43.343724595 +0000 UTC m=+0.108700086 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, architecture=x86_64, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543)
Dec  1 19:40:43 compute-0 podman[244420]: 2025-12-01 19:40:43.371621219 +0000 UTC m=+0.125355384 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125)
Dec  1 19:40:44 compute-0 podman[244457]: 2025-12-01 19:40:44.796829753 +0000 UTC m=+0.092713470 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 19:40:44 compute-0 podman[244456]: 2025-12-01 19:40:44.825168411 +0000 UTC m=+0.116707801 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4)
Dec  1 19:40:44 compute-0 podman[244458]: 2025-12-01 19:40:44.88666934 +0000 UTC m=+0.174983787 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:40:45 compute-0 nova_compute[189564]: 2025-12-01 19:40:45.765 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:48 compute-0 nova_compute[189564]: 2025-12-01 19:40:48.081 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:50 compute-0 nova_compute[189564]: 2025-12-01 19:40:50.769 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:53 compute-0 nova_compute[189564]: 2025-12-01 19:40:53.084 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:54 compute-0 podman[244523]: 2025-12-01 19:40:54.369629638 +0000 UTC m=+0.135397672 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 19:40:54 compute-0 nova_compute[189564]: 2025-12-01 19:40:54.854 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:40:54 compute-0 nova_compute[189564]: 2025-12-01 19:40:54.854 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:40:55 compute-0 nova_compute[189564]: 2025-12-01 19:40:55.724 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:40:55 compute-0 nova_compute[189564]: 2025-12-01 19:40:55.725 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:40:55 compute-0 nova_compute[189564]: 2025-12-01 19:40:55.725 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:40:55 compute-0 nova_compute[189564]: 2025-12-01 19:40:55.772 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:58 compute-0 nova_compute[189564]: 2025-12-01 19:40:58.087 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:40:58 compute-0 nova_compute[189564]: 2025-12-01 19:40:58.110 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:40:58 compute-0 nova_compute[189564]: 2025-12-01 19:40:58.128 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:40:58 compute-0 nova_compute[189564]: 2025-12-01 19:40:58.129 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 19:40:58 compute-0 nova_compute[189564]: 2025-12-01 19:40:58.130 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:40:58 compute-0 nova_compute[189564]: 2025-12-01 19:40:58.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:40:58 compute-0 nova_compute[189564]: 2025-12-01 19:40:58.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:40:59 compute-0 nova_compute[189564]: 2025-12-01 19:40:59.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:40:59 compute-0 podman[244544]: 2025-12-01 19:40:59.346049071 +0000 UTC m=+0.103183061 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:40:59 compute-0 podman[203750]: time="2025-12-01T19:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:40:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:40:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec  1 19:41:00 compute-0 nova_compute[189564]: 2025-12-01 19:41:00.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:41:00 compute-0 nova_compute[189564]: 2025-12-01 19:41:00.775 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:41:01 compute-0 nova_compute[189564]: 2025-12-01 19:41:01.245 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:41:01 compute-0 nova_compute[189564]: 2025-12-01 19:41:01.246 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:41:01 compute-0 openstack_network_exporter[205914]: ERROR   19:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:41:01 compute-0 openstack_network_exporter[205914]: ERROR   19:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:41:01 compute-0 openstack_network_exporter[205914]: ERROR   19:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:41:01 compute-0 openstack_network_exporter[205914]: ERROR   19:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:41:01 compute-0 openstack_network_exporter[205914]: ERROR   19:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:41:02 compute-0 nova_compute[189564]: 2025-12-01 19:41:02.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:41:02 compute-0 nova_compute[189564]: 2025-12-01 19:41:02.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.092 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.261 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.262 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.296 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.296 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.297 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.298 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.402 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.511 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.512 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.586 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.588 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.662 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.664 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.724 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.735 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.833 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.835 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.896 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.898 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.964 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:41:03 compute-0 nova_compute[189564]: 2025-12-01 19:41:03.966 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.058 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.572 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.573 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4977MB free_disk=72.36139678955078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.573 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.574 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.730 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.731 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.732 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.733 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.896 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.913 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.914 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:41:04 compute-0 nova_compute[189564]: 2025-12-01 19:41:04.915 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
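The acquire/waited/released triple above is oslo.concurrency's named-lock pattern serializing the whole resource-tracker update. A minimal sketch, assuming oslo.concurrency is installed; the decorated function body is hypothetical, not nova's actual code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # Everything here runs under the named "compute_resources" lock,
        # which is what produces the Acquiring/acquired/released lines above.
        pass

    _update_available_resource()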
Dec  1 19:41:05 compute-0 podman[244596]: 2025-12-01 19:41:05.353753953 +0000 UTC m=+0.108743008 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true)
Dec  1 19:41:05 compute-0 nova_compute[189564]: 2025-12-01 19:41:05.778 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:06 compute-0 nova_compute[189564]: 2025-12-01 19:41:06.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:41:06 compute-0 nova_compute[189564]: 2025-12-01 19:41:06.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  1 19:41:08 compute-0 nova_compute[189564]: 2025-12-01 19:41:08.095 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:10 compute-0 nova_compute[189564]: 2025-12-01 19:41:10.782 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
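The recurring "[POLLIN] on fd 27" lines are the OVS IDL noticing that its OVSDB connection's file descriptor became readable. A self-contained illustration of the same readiness event using only the standard library (not ovsdbapp's actual loop):

    import os
    import select

    r, w = os.pipe()
    poller = select.poll()
    poller.register(r, select.POLLIN)
    os.write(w, b'x')                      # make fd r readable
    for fd, event in poller.poll(1000):
        if event & select.POLLIN:
            print(f'[POLLIN] on fd {fd}')  # analogous to the log lines above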
Dec  1 19:41:11 compute-0 podman[244616]: 2025-12-01 19:41:11.336194724 +0000 UTC m=+0.091526341 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:41:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:41:12.191 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:41:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:41:12.192 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:41:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:41:12.193 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:41:12 compute-0 nova_compute[189564]: 2025-12-01 19:41:12.284 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:41:12 compute-0 nova_compute[189564]: 2025-12-01 19:41:12.284 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  1 19:41:12 compute-0 nova_compute[189564]: 2025-12-01 19:41:12.302 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  1 19:41:13 compute-0 nova_compute[189564]: 2025-12-01 19:41:13.097 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:14 compute-0 podman[244638]: 2025-12-01 19:41:14.395905336 +0000 UTC m=+0.146635849 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, distribution-scope=public, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Dec  1 19:41:14 compute-0 podman[244639]: 2025-12-01 19:41:14.424231874 +0000 UTC m=+0.168575685 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:41:15 compute-0 podman[244674]: 2025-12-01 19:41:15.369443903 +0000 UTC m=+0.131534540 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 19:41:15 compute-0 podman[244675]: 2025-12-01 19:41:15.369522595 +0000 UTC m=+0.120708796 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 19:41:15 compute-0 podman[244676]: 2025-12-01 19:41:15.45137481 +0000 UTC m=+0.199048290 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:41:15 compute-0 nova_compute[189564]: 2025-12-01 19:41:15.785 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:18 compute-0 nova_compute[189564]: 2025-12-01 19:41:18.101 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:20 compute-0 nova_compute[189564]: 2025-12-01 19:41:20.788 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:23 compute-0 nova_compute[189564]: 2025-12-01 19:41:23.103 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:25 compute-0 podman[244735]: 2025-12-01 19:41:25.359907427 +0000 UTC m=+0.121787732 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal)
Dec  1 19:41:25 compute-0 nova_compute[189564]: 2025-12-01 19:41:25.792 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:28 compute-0 nova_compute[189564]: 2025-12-01 19:41:28.107 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:29 compute-0 podman[203750]: time="2025-12-01T19:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:41:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:41:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
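The two requests above are the podman system service answering libpod REST calls over its Unix socket; given the CONTAINER_HOST=unix:///run/podman/podman.sock setting in the podman_exporter config logged earlier, that exporter is the likely client. A minimal sketch of issuing the same container-list request with only the standard library; the socket path comes from that config, and the snippet prints just the HTTP status line:

    import socket

    SOCK = '/run/podman/podman.sock'  # from the podman_exporter config above
    request = (b'GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n'
               b'Host: localhost\r\nConnection: close\r\n\r\n')

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(request)
        raw = b''
        while chunk := s.recv(4096):
            raw += chunk
    print(raw.split(b'\r\n', 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"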
Dec  1 19:41:30 compute-0 podman[244754]: 2025-12-01 19:41:30.341668616 +0000 UTC m=+0.111945339 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 19:41:30 compute-0 nova_compute[189564]: 2025-12-01 19:41:30.795 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:31 compute-0 openstack_network_exporter[205914]: ERROR   19:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:41:31 compute-0 openstack_network_exporter[205914]: ERROR   19:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:41:31 compute-0 openstack_network_exporter[205914]: ERROR   19:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:41:31 compute-0 openstack_network_exporter[205914]: ERROR   19:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:41:31 compute-0 openstack_network_exporter[205914]: ERROR   19:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
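The exporter errors above come from appctl-style calls that need a per-daemon control socket; ovn-northd normally runs on the OVN database/controller nodes, so its socket is expectedly absent on a compute host. A hypothetical check along the same lines (the glob patterns are typical default paths, not read from the exporter's config):

    import glob

    for daemon, pattern in (
        ('ovn-northd', '/var/run/ovn/ovn-northd.*.ctl'),
        ('ovsdb-server', '/var/run/openvswitch/ovsdb-server.*.ctl'),
    ):
        hits = glob.glob(pattern)
        if not hits:
            print(f'Failed to get PID for {daemon}: no control socket files found')
        else:
            print(daemon, '->', hits)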
Dec  1 19:41:33 compute-0 nova_compute[189564]: 2025-12-01 19:41:33.112 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:35 compute-0 nova_compute[189564]: 2025-12-01 19:41:35.799 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:36 compute-0 podman[244778]: 2025-12-01 19:41:36.350104949 +0000 UTC m=+0.111826245 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:41:38 compute-0 nova_compute[189564]: 2025-12-01 19:41:38.114 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:40 compute-0 nova_compute[189564]: 2025-12-01 19:41:40.803 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:42 compute-0 podman[244798]: 2025-12-01 19:41:42.405671518 +0000 UTC m=+0.158739062 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:41:43 compute-0 nova_compute[189564]: 2025-12-01 19:41:43.116 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:44 compute-0 podman[244824]: 2025-12-01 19:41:44.781490963 +0000 UTC m=+0.098955047 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 19:41:44 compute-0 podman[244823]: 2025-12-01 19:41:44.799089271 +0000 UTC m=+0.113700625 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec  1 19:41:45 compute-0 nova_compute[189564]: 2025-12-01 19:41:45.807 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:46 compute-0 podman[244862]: 2025-12-01 19:41:46.384067008 +0000 UTC m=+0.141064363 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  1 19:41:46 compute-0 podman[244861]: 2025-12-01 19:41:46.393717213 +0000 UTC m=+0.154541549 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Dec  1 19:41:46 compute-0 podman[244863]: 2025-12-01 19:41:46.422514906 +0000 UTC m=+0.169817053 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  1 19:41:48 compute-0 nova_compute[189564]: 2025-12-01 19:41:48.121 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.813 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling run can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.814 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.814 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e1c6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
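The registration burst above feeds every pollster into one ThreadPoolExecutor; with a single worker thread (as the "[1] threads" line warned), the tasks simply queue. A stripped-down sketch of that pattern, with illustrative pollster names rather than ceilometer's internals:

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ['network.incoming.bytes.delta', 'network.outgoing.packets',
                 'cpu', 'memory.usage']

    def poll(name):
        # Real pollsters run discovery and collect samples; this just labels.
        return f'polled {name}'

    with ThreadPoolExecutor(max_workers=1) as executor:  # one worker: tasks queue
        for result in executor.map(poll, pollsters):
            print(result)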
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.824 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.830 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'name': 'vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.830 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.831 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.831 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.831 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.832 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:41:48.831368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.839 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.846 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.847 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
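One complete pollster cycle is visible above: a discovery run, a coordination check against the (empty) hashring, a heartbeat, one sample per discovered instance, then the closing INFO line. Note also the third field after the oslo.log timestamp, the process id: polling runs in process 15 while the "Updated heartbeat" confirmations are written by process 12. A schematic, runnable reconstruction of that cycle follows; every name in it is a stand-in, not real ceilometer API, and it exists only to make the ordering of the messages explicit.

```python
# Schematic of the per-pollster flow traced in the log above.
def discover(method):                      # "Executing discovery process ..."
    return ["e73931e9-f7fa-4666-b781-700b385532a9",
            "850ac274-3f22-41ce-b7d7-ac64d7adac70"]

def needs_coordination(name):              # "Checking if we need coordination ..."
    return False                           # coordination group name is [None] here

def heartbeat(name):                       # "Pollster heartbeat update: ..."
    print(f"heartbeat {name}")

def get_samples(name, instance):           # one inspector call per instance
    yield (instance, name, 0)

def run_pollster(name):
    resources = discover("local_instances")
    if not resources:
        print(f"Skip pollster {name}, no new resources found this cycle")
        return
    if needs_coordination(name):
        pass                               # would filter resources via the hashring
    heartbeat(name)
    for instance in resources:
        for uuid, meter, volume in get_samples(name, instance):
            print(f"{uuid}/{meter} volume: {volume}")   # the DEBUG sample lines

run_pollster("network.incoming.bytes.delta")
```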
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.847 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.847 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.848 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.848 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.848 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.848 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.849 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.849 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.850 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.850 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.850 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.850 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:41:48.848384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.851 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.851 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.851 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:41:48.850944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.851 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.852 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.852 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.852 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.853 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.853 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.853 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.853 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:41:48.853369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.854 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.854 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.855 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.855 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.855 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.855 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.855 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.856 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:41:48.855713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.856 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.857 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.857 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.857 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.857 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.857 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.858 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:41:48.858111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.892 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.893 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.893 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.920 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.921 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.921 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.922 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
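The capacity pollster emits three samples per instance, one per block device. For the first instance, the first two are exactly 1 GiB, matching the m1.small flavor's disk=1 and ephemeral=1 from the discovery entries; the much smaller third device is presumably the config drive (an inference, not something the log states). A quick check of the arithmetic:

```python
# Sanity-check the three per-device capacity samples for the first instance
# against the flavor (disk = 1 GiB, ephemeral = 1 GiB). The config-drive
# reading of the third device is an assumption.
GiB = 2 ** 30
for volume in (1073741824, 1073741824, 485376):
    print(f"{volume:>12} B = {volume / GiB:.6f} GiB")
# 1073741824 B = 1.000000 GiB   <- flavor root disk
# 1073741824 B = 1.000000 GiB   <- flavor ephemeral disk
#       485376 B = 0.000452 GiB <- small third device
```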
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.922 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.923 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.923 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.923 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.923 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:48.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:41:48.923698) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.018 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.019 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.019 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.116 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.117 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.118 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.119 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.119 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.119 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.119 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.120 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.120 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.120 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.121 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.121 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
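network.incoming.bytes is the cumulative counter (2136 bytes lifetime for the first instance), whereas network.incoming.bytes.delta at the top of this excerpt reported 0: nothing arrived since the previous poll. A sketch of deriving a delta meter from consecutive cumulative readings (illustrative only, not ceilometer's implementation):

```python
# Derive per-interval deltas from a cumulative counter; the first observation
# of a resource yields 0, matching the delta samples above.
previous = {}

def delta(resource, cumulative):
    last = previous.setdefault(resource, cumulative)
    previous[resource] = cumulative
    return cumulative - last

print(delta("e73931e9/eth0", 2136))  # 0   (first observation)
print(delta("e73931e9/eth0", 2136))  # 0   (no new traffic)
print(delta("e73931e9/eth0", 2500))  # 364 (364 new bytes since last poll)
```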
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.122 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.122 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.122 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:41:49.120250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.123 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.123 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.123 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.124 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.124 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.124 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.125 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.125 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 578521054 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.126 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 98903610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.126 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 76991265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.127 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
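The read-latency samples are cumulative time spent on reads, in nanoseconds if the usual mapping to libvirt's rd_total_times counter holds (an assumption, since the log does not state the unit). Under that reading, the first instance's root device has accumulated roughly half a second of read time:

```python
# Convert the cumulative read-latency samples for the first instance
# (assumed nanoseconds, per the usual libvirt rd_total_times mapping)
# into seconds for readability.
for ns in (474440550, 65600453, 49214734):
    print(f"{ns:>11} ns = {ns / 1e9:.3f} s")
# 474440550 ns = 0.474 s
#  65600453 ns = 0.066 s
#  49214734 ns = 0.049 s
```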
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.128 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:41:49.124134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.128 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.128 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.128 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.129 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.129 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.129 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.130 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.130 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.131 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.132 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.132 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.133 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.133 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.133 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.134 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.134 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.134 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:41:49.129101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:41:49.134196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.135 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.135 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.136 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.136 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.137 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.137 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.138 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.138 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.138 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.138 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.139 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.139 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:41:49.139054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.140 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.140 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.141 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.141 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.141 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.142 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.143 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.143 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.143 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.143 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.143 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:41:49.143672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.195 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.220 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.221 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
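Both instances report power.state volume 1, consistent with their OS-EXT-STS:vm_state of 'running' in the discovery entries. The value follows nova's power-state enumeration (nova.compute.power_state); the mapping below is quoted from memory of that module, so treat it as a reference sketch rather than authoritative:

```python
# Nova power-state codes as commonly documented; 1 is what both instances
# report above.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}
print(POWER_STATES[1])  # RUNNING
```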
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.221 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.221 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.221 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.221 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.221 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.221 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.222 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:41:49.221636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.222 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.222 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 2063543219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.222 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 12721696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.223 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.223 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.223 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.223 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.224 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.224 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.224 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.224 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:41:49.224248) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.224 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.225 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.225 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.225 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.225 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.226 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.226 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.226 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.226 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.226 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.226 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.227 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.227 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.227 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.228 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:41:49.226940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.228 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.228 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.229 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
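Taken together, disk.device.capacity, disk.device.allocation, and disk.device.usage line up with the triple that libvirt's virDomainGetBlockInfo returns per device: capacity (virtual size), allocation (bytes allocated in the image), and physical (size of the backing file on the host, reported here as usage). A minimal probe of the same numbers at the libvirt level, assuming libvirt-python is installed, the instance name from the discovery entries, and that the root device is vda:

```python
import libvirt

# Query the block-info triple for the first instance's root device; requires
# read access to the system libvirt socket on the compute host. The device
# name "vda" is an assumption.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000001")
capacity, allocation, physical = dom.blockInfo("vda")
print(f"capacity={capacity} allocation={allocation} physical={physical}")
conn.close()
```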
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.229 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.229 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.229 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.229 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.229 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:41:49.229788) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.230 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.230 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.230 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.230 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.230 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.231 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.231 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.231 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.231 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.232 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.232 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.232 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.232 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.232 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:41:49.231017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.233 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.233 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.233 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.233 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.233 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.233 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.233 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.234 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.234 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.234 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.234 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.235 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.235 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.235 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.235 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.236 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.236 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.237 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.237 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.237 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.237 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.237 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.237 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:41:49.232470) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:41:49.233722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:41:49.235778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:41:49.237656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.238 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.239 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.239 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.239 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 42800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.239 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/cpu volume: 35740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.239 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.239 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.240 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.240 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.240 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.240 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.240 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:41:49.239136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.240 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/memory.usage volume: 49.05859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:41:49.240263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.241 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.241 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.242 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.243 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.244 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.244 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.244 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.244 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:41:49.245 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:41:50 compute-0 nova_compute[189564]: 2025-12-01 19:41:50.811 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.124 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.193 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.228 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid e73931e9-f7fa-4666-b781-700b385532a9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.229 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid 850ac274-3f22-41ce-b7d7-ac64d7adac70 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.230 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.231 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "e73931e9-f7fa-4666-b781-700b385532a9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.232 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.232 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.288 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "e73931e9-f7fa-4666-b781-700b385532a9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:41:53 compute-0 nova_compute[189564]: 2025-12-01 19:41:53.293 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:41:55 compute-0 nova_compute[189564]: 2025-12-01 19:41:55.291 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:41:55 compute-0 nova_compute[189564]: 2025-12-01 19:41:55.294 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:41:55 compute-0 nova_compute[189564]: 2025-12-01 19:41:55.295 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 19:41:55 compute-0 nova_compute[189564]: 2025-12-01 19:41:55.814 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:55 compute-0 nova_compute[189564]: 2025-12-01 19:41:55.817 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:41:55 compute-0 nova_compute[189564]: 2025-12-01 19:41:55.818 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:41:55 compute-0 nova_compute[189564]: 2025-12-01 19:41:55.818 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:41:55 compute-0 nova_compute[189564]: 2025-12-01 19:41:55.819 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:41:56 compute-0 podman[244921]: 2025-12-01 19:41:56.356515471 +0000 UTC m=+0.118631776 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Dec  1 19:41:56 compute-0 nova_compute[189564]: 2025-12-01 19:41:56.998 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:41:57 compute-0 nova_compute[189564]: 2025-12-01 19:41:57.013 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:41:57 compute-0 nova_compute[189564]: 2025-12-01 19:41:57.014 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 19:41:57 compute-0 nova_compute[189564]: 2025-12-01 19:41:57.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:41:58 compute-0 nova_compute[189564]: 2025-12-01 19:41:58.126 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:41:59 compute-0 podman[203750]: time="2025-12-01T19:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:41:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:41:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec  1 19:42:00 compute-0 nova_compute[189564]: 2025-12-01 19:42:00.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:42:00 compute-0 nova_compute[189564]: 2025-12-01 19:42:00.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:42:00 compute-0 nova_compute[189564]: 2025-12-01 19:42:00.817 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:01 compute-0 nova_compute[189564]: 2025-12-01 19:42:01.246 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:42:01 compute-0 nova_compute[189564]: 2025-12-01 19:42:01.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:42:01 compute-0 podman[244940]: 2025-12-01 19:42:01.328486438 +0000 UTC m=+0.091029279 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:42:01 compute-0 openstack_network_exporter[205914]: ERROR   19:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:42:01 compute-0 openstack_network_exporter[205914]: ERROR   19:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:42:01 compute-0 openstack_network_exporter[205914]: ERROR   19:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:42:01 compute-0 openstack_network_exporter[205914]: ERROR   19:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:42:01 compute-0 openstack_network_exporter[205914]: ERROR   19:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:42:02 compute-0 nova_compute[189564]: 2025-12-01 19:42:02.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:42:03 compute-0 nova_compute[189564]: 2025-12-01 19:42:03.130 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:42:03 compute-0 nova_compute[189564]: 2025-12-01 19:42:03.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:42:04 compute-0 nova_compute[189564]: 2025-12-01 19:42:04.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.296 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.296 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.297 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.298 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.411 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.518 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.521 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.611 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.613 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.671 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.673 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.741 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.755 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.821 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.853 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.855 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.921 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.923 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.986 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:42:05 compute-0 nova_compute[189564]: 2025-12-01 19:42:05.987 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.065 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.602 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.604 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4968MB free_disk=72.36145401000977GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.604 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.605 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.710 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.711 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.711 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.712 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
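[Editor's note: a quick cross-check of the "Final resource view" arithmetic, using the two per-instance placement allocations just logged ({'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1} each) plus the 512 MB MEMORY_MB reservation visible in the inventory a few lines below; illustrative Python, not nova code:

    instances = 2                    # e73931e9-... and 850ac274-...
    reserved_mb = 512                # MEMORY_MB 'reserved' in the inventory below
    used_ram = reserved_mb + instances * 512   # = 1536, matches used_ram=1536MB
    used_disk = instances * 2                  # = 4,    matches used_disk=4GB
    used_vcpus = instances * 1                 # = 2,    matches used_vcpus=2
    free_vcpus = 8 - used_vcpus                # = 6,    matches free_vcpus=6 earlier
]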
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.775 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.902 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
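[Editor's note: placement treats each inventory entry as usable = (total - reserved) * allocation_ratio when deciding whether an allocation fits; applied to the numbers just logged (illustrative sketch):

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    usable = {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
              for rc, v in inv.items()}
    # -> {'VCPU': 32.0, 'MEMORY_MB': 7168.0, 'DISK_GB': 70.2}
]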
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.904 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:42:06 compute-0 nova_compute[189564]: 2025-12-01 19:42:06.905 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
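[Editor's note: the Acquiring/acquired/released triple above is oslo.concurrency's decorator-based locking; a minimal sketch of the pattern (names illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def _update_available_resource():
        # body runs with the "compute_resources" lock held; the
        # waited/held timings in the log are emitted by the decorator
        pass
]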
Dec  1 19:42:07 compute-0 podman[244989]: 2025-12-01 19:42:07.355311455 +0000 UTC m=+0.118886255 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
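[Editor's note: each health_status=healthy event is podman running the healthcheck declared in config_data ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/<service>). The same probe can be run by hand; a sketch, assuming the container name from the log:

    import subprocess
    # exit code 0 corresponds to the "healthy" status recorded in these events
    rc = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd']).returncode
]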
Dec  1 19:42:08 compute-0 nova_compute[189564]: 2025-12-01 19:42:08.132 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:10 compute-0 nova_compute[189564]: 2025-12-01 19:42:10.826 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:42:12.192 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:42:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:42:12.193 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:42:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:42:12.193 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:42:13 compute-0 nova_compute[189564]: 2025-12-01 19:42:13.136 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:13 compute-0 podman[245009]: 2025-12-01 19:42:13.315417611 +0000 UTC m=+0.078595953 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:42:15 compute-0 podman[245033]: 2025-12-01 19:42:15.347100634 +0000 UTC m=+0.105285762 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, vendor=Red Hat, Inc., name=ubi9, release=1214.1726694543, release-0.7.12=, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public)
Dec  1 19:42:15 compute-0 podman[245034]: 2025-12-01 19:42:15.360733907 +0000 UTC m=+0.117536922 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  1 19:42:15 compute-0 nova_compute[189564]: 2025-12-01 19:42:15.828 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:17 compute-0 podman[245068]: 2025-12-01 19:42:17.329464374 +0000 UTC m=+0.100598656 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:42:17 compute-0 podman[245069]: 2025-12-01 19:42:17.333753328 +0000 UTC m=+0.092724362 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:42:17 compute-0 podman[245070]: 2025-12-01 19:42:17.382096969 +0000 UTC m=+0.146803861 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:42:18 compute-0 nova_compute[189564]: 2025-12-01 19:42:18.138 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:20 compute-0 nova_compute[189564]: 2025-12-01 19:42:20.831 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:23 compute-0 nova_compute[189564]: 2025-12-01 19:42:23.141 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:25 compute-0 nova_compute[189564]: 2025-12-01 19:42:25.834 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:27 compute-0 podman[245133]: 2025-12-01 19:42:27.310082847 +0000 UTC m=+0.077630422 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible)
Dec  1 19:42:28 compute-0 nova_compute[189564]: 2025-12-01 19:42:28.143 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:29 compute-0 podman[203750]: time="2025-12-01T19:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:42:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:42:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
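[Editor's note: the two GET lines above are clients of the podman system service speaking the libpod REST API over /run/podman/podman.sock (the socket the podman_exporter config points CONTAINER_HOST at). A stdlib-only sketch of the first query:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
]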
Dec  1 19:42:30 compute-0 nova_compute[189564]: 2025-12-01 19:42:30.837 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:31 compute-0 openstack_network_exporter[205914]: ERROR   19:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:42:31 compute-0 openstack_network_exporter[205914]: ERROR   19:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:42:31 compute-0 openstack_network_exporter[205914]: ERROR   19:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:42:31 compute-0 openstack_network_exporter[205914]: ERROR   19:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:42:31 compute-0 openstack_network_exporter[205914]: ERROR   19:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
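[Editor's note: these exporter errors mean its ovs-appctl-style calls found no *.ctl control sockets to talk to. A quick existence check of the conventional rundirs (the paths are an assumption here; ovn-northd normally runs on the control plane rather than on a compute node, so that particular error is likely expected noise):

    import glob
    ovsdb = glob.glob('/var/run/openvswitch/ovsdb-server.*.ctl')
    northd = glob.glob('/var/run/ovn/ovn-northd.*.ctl')
    # empty lists reproduce the "no control socket files found" errors above
]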
Dec  1 19:42:32 compute-0 podman[245154]: 2025-12-01 19:42:32.315191786 +0000 UTC m=+0.077877890 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 19:42:33 compute-0 nova_compute[189564]: 2025-12-01 19:42:33.145 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:35 compute-0 nova_compute[189564]: 2025-12-01 19:42:35.841 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:38 compute-0 nova_compute[189564]: 2025-12-01 19:42:38.147 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:38 compute-0 podman[245177]: 2025-12-01 19:42:38.377512956 +0000 UTC m=+0.130461664 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 19:42:40 compute-0 nova_compute[189564]: 2025-12-01 19:42:40.844 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:43 compute-0 nova_compute[189564]: 2025-12-01 19:42:43.151 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:44 compute-0 podman[245197]: 2025-12-01 19:42:44.323231063 +0000 UTC m=+0.093289809 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:42:45 compute-0 nova_compute[189564]: 2025-12-01 19:42:45.846 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:46 compute-0 podman[245221]: 2025-12-01 19:42:46.323303469 +0000 UTC m=+0.100382110 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-type=git, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container)
Dec  1 19:42:46 compute-0 podman[245222]: 2025-12-01 19:42:46.325417335 +0000 UTC m=+0.096123639 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 19:42:48 compute-0 nova_compute[189564]: 2025-12-01 19:42:48.153 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:48 compute-0 podman[245258]: 2025-12-01 19:42:48.310155094 +0000 UTC m=+0.081701570 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  1 19:42:48 compute-0 podman[245259]: 2025-12-01 19:42:48.316200322 +0000 UTC m=+0.083121713 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec  1 19:42:48 compute-0 podman[245260]: 2025-12-01 19:42:48.355500933 +0000 UTC m=+0.116836531 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 19:42:50 compute-0 nova_compute[189564]: 2025-12-01 19:42:50.850 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:53 compute-0 nova_compute[189564]: 2025-12-01 19:42:53.156 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:55 compute-0 nova_compute[189564]: 2025-12-01 19:42:55.852 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:58 compute-0 nova_compute[189564]: 2025-12-01 19:42:58.159 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:42:58 compute-0 podman[245326]: 2025-12-01 19:42:58.35422451 +0000 UTC m=+0.111708032 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec  1 19:42:58 compute-0 nova_compute[189564]: 2025-12-01 19:42:58.905 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:42:58 compute-0 nova_compute[189564]: 2025-12-01 19:42:58.906 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
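[Editor's note: every "Running periodic task" line is oslo.service iterating over methods registered with its periodic_task decorator; a minimal sketch of how such a task is declared (illustrative, not nova's actual code):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)   # seconds between runs
        def _heal_instance_info_cache(self, context):
            # refresh the next instance's cached network info, as the
            # _heal_instance_info_cache lines in this log do
            pass
]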
Dec  1 19:42:59 compute-0 podman[203750]: time="2025-12-01T19:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:42:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:42:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec  1 19:42:59 compute-0 nova_compute[189564]: 2025-12-01 19:42:59.825 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:42:59 compute-0 nova_compute[189564]: 2025-12-01 19:42:59.826 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:42:59 compute-0 nova_compute[189564]: 2025-12-01 19:42:59.826 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:43:00 compute-0 nova_compute[189564]: 2025-12-01 19:43:00.857 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:43:01 compute-0 openstack_network_exporter[205914]: ERROR   19:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:43:01 compute-0 openstack_network_exporter[205914]: ERROR   19:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:43:01 compute-0 openstack_network_exporter[205914]: ERROR   19:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:43:01 compute-0 openstack_network_exporter[205914]: ERROR   19:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:43:01 compute-0 openstack_network_exporter[205914]: ERROR   19:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:43:02 compute-0 nova_compute[189564]: 2025-12-01 19:43:02.936 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
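[Editor's note: the network_info blob above is a JSON list with one VIF; pulling the addresses out of it is straightforward (sketch; network_info_json stands for the list exactly as logged):

    import json
    vif = json.loads(network_info_json)[0]
    fixed = [ip['address']
             for subnet in vif['network']['subnets']
             for ip in subnet['ips']]                          # ['192.168.0.62']
    floating = [fip['address']
                for subnet in vif['network']['subnets']
                for ip in subnet['ips']
                for fip in ip.get('floating_ips', [])]         # ['192.168.122.240']
]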
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.038 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.039 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.040 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.041 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.042 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.043 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.161 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.251 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.251 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:43:03 compute-0 nova_compute[189564]: 2025-12-01 19:43:03.347 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:43:03 compute-0 podman[245346]: 2025-12-01 19:43:03.35690889 +0000 UTC m=+0.125000635 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:43:04 compute-0 nova_compute[189564]: 2025-12-01 19:43:04.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:43:05 compute-0 nova_compute[189564]: 2025-12-01 19:43:05.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:43:05 compute-0 nova_compute[189564]: 2025-12-01 19:43:05.859 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.271 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.271 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.271 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
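The Acquiring/acquired/released trio above is oslo.concurrency's standard lock logging: the resource tracker serializes everything that touches "compute_resources" behind one named semaphore, and the wrapper logs how long it waited for and then held the lock. A minimal sketch of the pattern, assuming plain lockutils.synchronized rather than nova's own wrapper around it:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Body runs only while holding the "compute_resources" semaphore;
        # the decorator's inner() emits the Acquiring / acquired (waited Ns)
        # / released (held Ns) DEBUG lines seen above (lockutils.py:404/409/423).
        pass

    clean_compute_node_cache()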
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.272 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.355 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.432 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.433 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.493 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.495 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.552 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.554 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.616 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.626 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.687 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.688 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.750 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.752 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.811 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.813 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:43:07 compute-0 nova_compute[189564]: 2025-12-01 19:43:07.882 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
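Each Running cmd / CMD returned pair above is one image probe during the resource audit: qemu-img info is re-exec'd under python -m oslo_concurrency.prlimit so a hung or hostile image can only consume a bounded address space (--as=1073741824, i.e. 1 GiB) and CPU time (--cpu=30 seconds). A minimal sketch of issuing the same probe through oslo.concurrency, assuming the stock processutils API rather than nova's internal helpers:

    from oslo_concurrency import processutils

    # Limits matching the logged command line: --as=1073741824 --cpu=30.
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        address_space=1 * 1024 * 1024 * 1024,  # bytes of virtual memory
        cpu_time=30,                           # seconds of CPU time
    )

    def qemu_img_info(path):
        # --force-share lets qemu-img read an image a running guest holds open;
        # prlimit= makes processutils re-exec the command under
        # `python -m oslo_concurrency.prlimit`, exactly as logged above.
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json',
            prlimit=QEMU_IMG_LIMITS,
        )
        return out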
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.165 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.288 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.290 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4979MB free_disk=72.36145401000977GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.290 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.291 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.625 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.626 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.626 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.627 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.705 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.973 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
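The inventory line above is enough to recompute what placement will actually schedule against: for each resource class the usable capacity is (total - reserved) * allocation_ratio. A quick worked sketch using the logged numbers:

    # Capacity formula used by placement: (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable")
    # -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 70.2
    # Consistent with "Total usable vcpus: 8, total allocated vcpus: 2" above:
    # the 4.0 VCPU allocation ratio is what lets 8 physical cores back 32 vCPUs.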
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.977 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:43:08 compute-0 nova_compute[189564]: 2025-12-01 19:43:08.977 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:43:09 compute-0 podman[245393]: 2025-12-01 19:43:09.369323624 +0000 UTC m=+0.119209766 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  1 19:43:10 compute-0 nova_compute[189564]: 2025-12-01 19:43:10.861 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:43:12.193 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:43:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:43:12.194 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:43:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:43:12.195 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:43:13 compute-0 nova_compute[189564]: 2025-12-01 19:43:13.168 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:14 compute-0 podman[245413]: 2025-12-01 19:43:14.830939555 +0000 UTC m=+0.114793418 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:43:15 compute-0 nova_compute[189564]: 2025-12-01 19:43:15.864 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:17 compute-0 podman[245439]: 2025-12-01 19:43:17.344671092 +0000 UTC m=+0.102145245 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:43:17 compute-0 podman[245438]: 2025-12-01 19:43:17.357289524 +0000 UTC m=+0.121015541 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, version=9.4, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, name=ubi9)
Dec  1 19:43:18 compute-0 nova_compute[189564]: 2025-12-01 19:43:18.170 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:18 compute-0 podman[245477]: 2025-12-01 19:43:18.509069812 +0000 UTC m=+0.089617685 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  1 19:43:18 compute-0 podman[245478]: 2025-12-01 19:43:18.52700759 +0000 UTC m=+0.090269506 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:43:18 compute-0 podman[245479]: 2025-12-01 19:43:18.568395336 +0000 UTC m=+0.138181035 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  1 19:43:20 compute-0 nova_compute[189564]: 2025-12-01 19:43:20.866 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:23 compute-0 nova_compute[189564]: 2025-12-01 19:43:23.173 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:25 compute-0 nova_compute[189564]: 2025-12-01 19:43:25.869 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:28 compute-0 nova_compute[189564]: 2025-12-01 19:43:28.177 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:29 compute-0 podman[245539]: 2025-12-01 19:43:29.388367722 +0000 UTC m=+0.147948638 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 19:43:29 compute-0 podman[203750]: time="2025-12-01T19:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:43:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:43:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
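The two @ - - GET lines are podman_exporter scraping the libpod REST API over the podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config above). A minimal stdlib sketch of the same request, assuming that default socket path and a caller with permission to open it; the /v4.9.3 prefix pins the API version seen in the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket; the 'localhost' host is a placeholder."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(resp.status, len(containers), 'containers')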
Dec  1 19:43:30 compute-0 nova_compute[189564]: 2025-12-01 19:43:30.871 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:31 compute-0 openstack_network_exporter[205914]: ERROR   19:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:43:31 compute-0 openstack_network_exporter[205914]: ERROR   19:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:43:31 compute-0 openstack_network_exporter[205914]: ERROR   19:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:43:31 compute-0 openstack_network_exporter[205914]: ERROR   19:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:43:31 compute-0 openstack_network_exporter[205914]: ERROR   19:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:43:33 compute-0 nova_compute[189564]: 2025-12-01 19:43:33.179 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:34 compute-0 podman[245562]: 2025-12-01 19:43:34.344190224 +0000 UTC m=+0.111659051 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:43:35 compute-0 nova_compute[189564]: 2025-12-01 19:43:35.875 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:38 compute-0 nova_compute[189564]: 2025-12-01 19:43:38.183 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:40 compute-0 podman[245587]: 2025-12-01 19:43:40.372675947 +0000 UTC m=+0.127466391 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 19:43:40 compute-0 nova_compute[189564]: 2025-12-01 19:43:40.879 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:43 compute-0 nova_compute[189564]: 2025-12-01 19:43:43.185 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:45 compute-0 podman[245606]: 2025-12-01 19:43:45.314441537 +0000 UTC m=+0.091481083 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 19:43:45 compute-0 nova_compute[189564]: 2025-12-01 19:43:45.882 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:48 compute-0 nova_compute[189564]: 2025-12-01 19:43:48.187 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:43:48 compute-0 podman[245630]: 2025-12-01 19:43:48.31918563 +0000 UTC m=+0.088167301 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-type=git, config_id=edpm)
Dec  1 19:43:48 compute-0 podman[245631]: 2025-12-01 19:43:48.367168941 +0000 UTC m=+0.117256124 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.814 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.815 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.815 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.817 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6757d6a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
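The block above shows the agent binding every stevedore extension from the `pollsters` source to one shared ThreadPoolExecutor, each starting with an empty cache, pollster history, and discovery cache. A minimal Python sketch of that registration step follows; only the names printed in the log come from ceilometer, the record layout is an assumption for illustration.

# Sketch of register_pollster_execution as seen in the log: each
# pollster extension is bound to one shared executor together with
# empty cache/history/discovery dicts.
from concurrent.futures import ThreadPoolExecutor

class PollsterRegistration:
    def __init__(self, extension, source, executor):
        self.extension = extension      # a stevedore.extension.Extension in ceilometer
        self.source = source            # e.g. "pollsters"
        self.executor = executor        # shared ThreadPoolExecutor
        self.cache = {}                 # per-cycle cache, starts empty
        self.pollster_history = {}      # previous readings, used by *.delta meters
        self.discovery_cache = {}       # resources discovered this cycle

registry = []
executor = ThreadPoolExecutor(max_workers=4)

def register_pollster_execution(extension, source="pollsters"):
    reg = PollsterRegistration(extension, source, executor)
    registry.append(reg)
    print(f"Registering pollster [{extension}] from source [{source}]")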
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.825 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.828 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'name': 'vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
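Discovery returns one record per locally running instance; the shape below is reconstructed from the two `instance data:` entries above (field values copied from the log, the filtering line is an assumed illustration). Note the second instance carries a `metering.server_group` key in its nova metadata.

# Per-instance records emitted by discover_libvirt_polling, trimmed to
# the fields most relevant to the pollsters that follow.
instances = [
    {
        "id": "e73931e9-f7fa-4666-b781-700b385532a9",
        "name": "test_0",
        "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1},
        "OS-EXT-STS:vm_state": "running",
        "metadata": {},
    },
    {
        "id": "850ac274-3f22-41ce-b7d7-ac64d7adac70",
        "name": "vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx",
        "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 1},
        "OS-EXT-STS:vm_state": "running",
        "metadata": {"metering.server_group": "47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9"},
    },
]

# Pollsters only sample instances the hypervisor reports as running.
running = [i for i in instances if i["OS-EXT-STS:vm_state"] == "running"]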
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.828 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.828 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.828 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.828 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.829 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:43:48.828749) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.832 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.835 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.835 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
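Every meter then runs the same cycle visible above: a coordination check (a no-op here, since the coordination group name is None), a heartbeat update, and one sample per discovered instance; note that a separate worker (id 12 in the log) records the heartbeat timestamp. A sketch of that cycle, assuming all helper names other than those printed in the log:

import datetime

def heartbeat(meter_name):
    # In the log a separate worker (id 12) persists this timestamp.
    ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
    print(f"Updated heartbeat for {meter_name} ({ts})")

def read_stat(instance, meter_name):
    # Hypothetical stat reader; ceilometer obtains these counters from
    # libvirt domain and interface statistics.
    return 0

def poll_one_meter(meter_name, instances, hashring_group=None):
    # With no coordination group configured, the agent polls every
    # locally discovered instance itself.
    if hashring_group is not None:
        raise NotImplementedError("coordinated polling not sketched")
    heartbeat(meter_name)
    samples = []
    for inst in instances:
        volume = read_stat(inst, meter_name)
        print(f"{inst['id']}/{meter_name} volume: {volume}")
        samples.append((inst["id"], meter_name, volume))
    return samples

poll_one_meter("network.incoming.bytes.delta",
               [{"id": "e73931e9-f7fa-4666-b781-700b385532a9"},
                {"id": "850ac274-3f22-41ce-b7d7-ac64d7adac70"}])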
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.836 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.836 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.836 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.836 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.836 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.836 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.836 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:43:48.836341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.837 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.837 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.837 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.837 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.837 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.837 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.838 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.838 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:43:48.837746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.838 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.838 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
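The zero volumes for the `.delta` meters are expected: a delta meter reports the change in a cumulative counter between two polling cycles, so the first cycle after a (re)start yields 0. A minimal sketch of that bookkeeping, assuming the history is keyed by instance and meter (ceilometer keeps it in the pollster history dict seen at registration):

# *.delta bookkeeping: subtract the previous cumulative reading from
# the current one; with no previous reading, the delta is 0.
history = {}

def delta_sample(instance_id, meter, cumulative_now):
    key = (instance_id, meter)
    previous = history.get(key, cumulative_now)  # first cycle yields 0
    history[key] = cumulative_now
    return cumulative_now - previous

assert delta_sample("e73931e9", "network.incoming.bytes.delta", 2136) == 0
assert delta_sample("e73931e9", "network.incoming.bytes.delta", 4272) == 2136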
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.838 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.838 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.838 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.838 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.839 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.839 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:43:48.838981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.839 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.839 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.839 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.839 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.840 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.840 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.840 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.840 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.840 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.840 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.841 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.841 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:43:48.840071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.841 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.841 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.841 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.841 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:43:48.841193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.860 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.861 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.861 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.880 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.880 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.881 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.881 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
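disk.device.capacity emits one sample per block device, which is why each instance produces three lines above: two 1 GiB disks matching the flavor's root and ephemeral sizes, plus a much smaller third device. The device names below are assumptions (the log omits them); only the capacities come from the log.

# One capacity sample per block device of an instance; device names
# are hypothetical, capacities are taken from the log lines above.
devices = {
    "vda": 1073741824,   # 1 GiB root disk (flavor disk=1)
    "vdb": 1073741824,   # 1 GiB ephemeral disk (flavor ephemeral=1)
    "hdd": 485376,       # small third device, e.g. a config drive
}

def capacity_samples(instance_id, devs):
    for dev, capacity in devs.items():
        yield (instance_id, "disk.device.capacity", dev, capacity)

for sample in capacity_samples("e73931e9-f7fa-4666-b781-700b385532a9", devices):
    print(sample)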
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.881 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.881 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.881 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.881 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.882 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.882 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:43:48.881966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.939 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.940 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:48.940 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.036 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.036 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.037 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.037 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.038 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.038 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.038 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.038 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.038 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.038 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.039 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:43:49.038456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.039 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.039 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.039 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
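For network.incoming.bytes.rate the manager takes the skip branch instead of polling: discovery handed back no new resources for that pollster this cycle. A sketch of that guard (the cache structure is an assumption):

# Skip branch logged above: with an empty resource set for this cycle,
# the manager logs a skip instead of scheduling the pollster.
def maybe_run_pollster(name, discovered_resources):
    if not discovered_resources:
        print(f"Skip pollster {name}, no new resources found this cycle")
        return
    print(f"Polling pollster {name} in the context of pollsters")

maybe_run_pollster("network.incoming.bytes.rate", [])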
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.040 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.040 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.040 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.040 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:43:49.040475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.040 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.041 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.041 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.041 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.041 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 578521054 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.042 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 98903610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.042 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 76991265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.042 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
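The latency volumes above are cumulative device times in nanoseconds (libvirt's total-time counters), so combining them with the request counts from the next pollster run gives a rough mean service time. Pairing the first latency value with the first request count per instance is an assumption, since the log does not print device names:

# Rough mean read latency from the cumulative counters in the log;
# the per-device pairing is assumed, the values are copied verbatim.
total_read_ns = 474440550      # first device of instance e73931e9...
read_requests = 840            # same device, from the read.requests run below

mean_read_ms = total_read_ns / read_requests / 1e6
print(f"mean read latency ~ {mean_read_ms:.2f} ms")   # ~ 0.56 ms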
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.043 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.043 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.043 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.043 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.043 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.043 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.043 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.044 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.044 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.045 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.045 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:43:49.043477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.046 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.046 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.046 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.046 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.046 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.046 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.046 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.047 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:43:49.046508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.047 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.047 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.047 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.048 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.049 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.049 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.049 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.049 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.049 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.049 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.049 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.050 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.050 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:43:49.049773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.050 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.050 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.051 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.051 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.051 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.052 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.052 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.052 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.052 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.052 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:43:49.052530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.080 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.120 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.120 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
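power.state reports the libvirt domain state code, so volume 1 for both instances above means VIR_DOMAIN_RUNNING. The value-to-name mapping of that enum:

# libvirt virDomainState values; volume 1 in the log is "running".
LIBVIRT_DOMAIN_STATES = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}

def describe_power_state(volume):
    return LIBVIRT_DOMAIN_STATES.get(volume, "unknown")

assert describe_power_state(1) == "running"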
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.121 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.121 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.121 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.121 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.121 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.121 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.121 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.121 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.122 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 2063543219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.122 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 12721696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.122 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:43:49.121412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.123 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.123 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.123 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.123 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.123 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.123 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.123 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.123 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.124 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:43:49.123639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.124 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.124 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.124 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.125 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
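Each instance above yields three disk.device.write.requests volumes (233/1/0 and 232/1/0), one cumulative request counter per attached block device, read from libvirt. A minimal sketch of the underlying call, assuming the libvirt-python bindings and hypothetical device names vda/vdb/vdc:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    for dev in ("vda", "vdb", "vdc"):
        # blockStats returns cumulative (rd_req, rd_bytes, wr_req, wr_bytes, errs)
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, "write.requests:", wr_req)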
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.125 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.125 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.125 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.125 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.125 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.125 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.125 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.126 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.126 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.126 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.126 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:43:49.125551) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.127 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
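disk.device.allocation reports how many bytes of each device's backing storage are actually allocated, as opposed to the virtual size. A minimal sketch of one way to read the same figure via libvirt, assuming the libvirt-python bindings and a hypothetical device name:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("850ac274-3f22-41ce-b7d7-ac64d7adac70")
    # blockInfo returns (capacity, allocation, physical) in bytes
    capacity, allocation, physical = dom.blockInfo("vda")
    print("disk.device.allocation:", allocation)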
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.127 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.127 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.127 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.127 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.127 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.128 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.128 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.128 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.128 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.128 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.128 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.128 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.128 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.129 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
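The network.incoming.packets volumes are cumulative per-interface counters. A minimal sketch of the corresponding libvirt call, assuming the libvirt-python bindings; the tap device name is the one recorded in the instance's network info cache further below:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    # interfaceStats returns cumulative (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                                     tx_bytes, tx_packets, tx_errs, tx_drop)
    stats = dom.interfaceStats("tap3cef930c-87")
    print("network.incoming.packets:", stats[1])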
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:43:49.127705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.129 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:43:49.128594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.129 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.129 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.129 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.129 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.130 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.130 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:43:49.129785) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.130 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.130 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.130 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.130 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.130 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.131 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:43:49.130724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.131 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.131 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.131 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.131 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.131 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.132 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.132 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.132 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.132 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.132 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.133 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.133 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.133 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:43:49.132023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.133 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:43:49.133299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.133 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.133 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
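Unlike the cumulative meters, a *.rate meter needs two successive readings of the same resource before it can emit anything, which is why a cycle with no fresh resources is skipped. A minimal sketch of the derivation, with hypothetical numbers:

    # previous and current poll: (unix timestamp, cumulative outgoing bytes)
    t1, v1 = 1764618169.0, 2342
    t2, v2 = 1764618289.0, 4890
    rate = (v2 - v1) / (t2 - t1)  # bytes per second over the polling interval
    print(f"network.outgoing.bytes.rate = {rate:.2f} B/s")  # 21.23 B/s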
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 44500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.134 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/cpu volume: 37410000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.135 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
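The cpu volumes above are cumulative guest CPU time in nanoseconds (44.5 s and 37.41 s respectively), so utilisation has to be derived from two polls. A minimal sketch, where the second reading and the vCPU count are hypothetical:

    NS_PER_S = 1_000_000_000
    vcpus = 1
    cpu_ns_1, t1 = 44_500_000_000, 0.0    # reading from the log above
    cpu_ns_2, t2 = 47_200_000_000, 120.0  # hypothetical poll two minutes later
    util = (cpu_ns_2 - cpu_ns_1) / ((t2 - t1) * NS_PER_S * vcpus) * 100
    print(f"cpu_util ~= {util:.2f}%")  # 2.25%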
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.135 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:43:49.134673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.135 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.135 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.135 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.135 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.135 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.136 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.136 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
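The fractional memory.usage values are megabytes derived from KiB-granular libvirt memory statistics (48.79296875 MB is exactly 49964 KiB / 1024). A minimal sketch of the underlying call, assuming the libvirt-python bindings; which fields the agent combines depends on what the guest's balloon driver reports:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    stats = dom.memoryStats()  # KiB-granular dict: rss, available, usable, ...
    print({k: v / 1024 for k, v in stats.items()})  # values in MiB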
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.136 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:43:49.135896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.137 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:43:49.138 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:43:49 compute-0 podman[245668]: 2025-12-01 19:43:49.376181703 +0000 UTC m=+0.132660603 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:43:49 compute-0 podman[245667]: 2025-12-01 19:43:49.382074946 +0000 UTC m=+0.141061434 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  1 19:43:49 compute-0 podman[245669]: 2025-12-01 19:43:49.430902203 +0000 UTC m=+0.188033363 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 19:43:51 compute-0 nova_compute[189564]: 2025-12-01 19:43:50.885 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:43:53 compute-0 nova_compute[189564]: 2025-12-01 19:43:53.189 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:43:55 compute-0 nova_compute[189564]: 2025-12-01 19:43:55.886 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:43:58 compute-0 nova_compute[189564]: 2025-12-01 19:43:58.193 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:43:58 compute-0 nova_compute[189564]: 2025-12-01 19:43:58.978 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:43:58 compute-0 nova_compute[189564]: 2025-12-01 19:43:58.979 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:43:58 compute-0 nova_compute[189564]: 2025-12-01 19:43:58.980 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 19:43:59 compute-0 podman[203750]: time="2025-12-01T19:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:43:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:43:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
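The two GET requests above show a local client listing containers and container stats through Podman's libpod REST API over the service socket. A minimal sketch of the same container listing, assuming the rootful socket path /run/podman/podman.sock (rootless setups keep it under $XDG_RUNTIME_DIR/podman/podman.sock):

    import json
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/podman/podman.sock")
    # HTTP/1.0 keeps the response unchunked and closes the socket when done.
    sock.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
                 b"Host: d\r\n\r\n")
    raw = b""
    while chunk := sock.recv(65536):
        raw += chunk
    body = raw.partition(b"\r\n\r\n")[2]
    for c in json.loads(body):
        print(c["Names"], c["State"])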
Dec  1 19:43:59 compute-0 nova_compute[189564]: 2025-12-01 19:43:59.938 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:43:59 compute-0 nova_compute[189564]: 2025-12-01 19:43:59.938 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:43:59 compute-0 nova_compute[189564]: 2025-12-01 19:43:59.938 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:43:59 compute-0 nova_compute[189564]: 2025-12-01 19:43:59.939 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:44:00 compute-0 podman[245729]: 2025-12-01 19:44:00.337483861 +0000 UTC m=+0.099861583 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 19:44:00 compute-0 nova_compute[189564]: 2025-12-01 19:44:00.888 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:01 compute-0 openstack_network_exporter[205914]: ERROR   19:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:44:01 compute-0 openstack_network_exporter[205914]: ERROR   19:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:44:01 compute-0 openstack_network_exporter[205914]: ERROR   19:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:44:01 compute-0 openstack_network_exporter[205914]: ERROR   19:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
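These exporter errors are failed lookups for appctl control sockets that are absent on this node: ovn-northd runs on the control plane rather than on a compute node, and the OVS database server's socket is evidently not visible where the exporter looks. A minimal sketch of the same socket discovery, with the run directories taken from the container volume mounts logged earlier:

    import glob

    # appctl targets are resolved through <daemon>.<pid>.ctl control sockets
    # in the daemons' run directories.
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern))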
Dec  1 19:44:02 compute-0 nova_compute[189564]: 2025-12-01 19:44:02.300 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:44:02 compute-0 nova_compute[189564]: 2025-12-01 19:44:02.418 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:44:02 compute-0 nova_compute[189564]: 2025-12-01 19:44:02.418 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 19:44:02 compute-0 nova_compute[189564]: 2025-12-01 19:44:02.419 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:44:02 compute-0 nova_compute[189564]: 2025-12-01 19:44:02.419 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:44:02 compute-0 nova_compute[189564]: 2025-12-01 19:44:02.420 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:44:02 compute-0 nova_compute[189564]: 2025-12-01 19:44:02.420 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:44:03 compute-0 nova_compute[189564]: 2025-12-01 19:44:03.194 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:03 compute-0 nova_compute[189564]: 2025-12-01 19:44:03.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:44:03 compute-0 nova_compute[189564]: 2025-12-01 19:44:03.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:44:05 compute-0 nova_compute[189564]: 2025-12-01 19:44:05.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
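The _heal_instance_info_cache pass that just completed, like the other ComputeManager periodic tasks here, runs on a fixed cadence configured in nova.conf; the value below is the upstream default, shown only as an illustration:

    [DEFAULT]
    # seconds between runs of _heal_instance_info_cache; 0 disables the task
    heal_instance_info_cache_interval = 60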
Dec  1 19:44:05 compute-0 podman[245748]: 2025-12-01 19:44:05.348244569 +0000 UTC m=+0.114496358 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:44:05 compute-0 nova_compute[189564]: 2025-12-01 19:44:05.890 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:06 compute-0 nova_compute[189564]: 2025-12-01 19:44:06.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.290 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.291 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.292 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.292 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.425 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.523 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.525 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.618 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.620 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.692 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.694 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.805 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.818 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.917 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.919 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:44:07 compute-0 nova_compute[189564]: 2025-12-01 19:44:07.997 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.000 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.077 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.079 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.142 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.196 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.637 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.639 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4965MB free_disk=72.36145401000977GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.639 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.640 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.876 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.877 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.877 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.878 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.962 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.978 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.980 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:44:08 compute-0 nova_compute[189564]: 2025-12-01 19:44:08.980 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:44:10 compute-0 nova_compute[189564]: 2025-12-01 19:44:10.893 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:11 compute-0 podman[245797]: 2025-12-01 19:44:11.38440869 +0000 UTC m=+0.146412281 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 19:44:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:44:12.195 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:44:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:44:12.195 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:44:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:44:12.196 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:44:13 compute-0 nova_compute[189564]: 2025-12-01 19:44:13.198 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:15 compute-0 nova_compute[189564]: 2025-12-01 19:44:15.896 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:16 compute-0 podman[245818]: 2025-12-01 19:44:16.336347644 +0000 UTC m=+0.103333212 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:44:18 compute-0 nova_compute[189564]: 2025-12-01 19:44:18.200 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:19 compute-0 podman[245843]: 2025-12-01 19:44:19.360014664 +0000 UTC m=+0.107235253 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:44:19 compute-0 podman[245842]: 2025-12-01 19:44:19.366982601 +0000 UTC m=+0.122699433 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.29.0, version=9.4, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 19:44:20 compute-0 podman[245880]: 2025-12-01 19:44:20.353233085 +0000 UTC m=+0.102979191 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:44:20 compute-0 podman[245879]: 2025-12-01 19:44:20.400477014 +0000 UTC m=+0.152119618 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 19:44:20 compute-0 podman[245881]: 2025-12-01 19:44:20.442357915 +0000 UTC m=+0.184824144 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 19:44:20 compute-0 nova_compute[189564]: 2025-12-01 19:44:20.899 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:23 compute-0 nova_compute[189564]: 2025-12-01 19:44:23.202 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:25 compute-0 nova_compute[189564]: 2025-12-01 19:44:25.902 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:28 compute-0 nova_compute[189564]: 2025-12-01 19:44:28.206 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:29 compute-0 podman[203750]: time="2025-12-01T19:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:44:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:44:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec  1 19:44:30 compute-0 nova_compute[189564]: 2025-12-01 19:44:30.907 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:31 compute-0 podman[245941]: 2025-12-01 19:44:31.339128302 +0000 UTC m=+0.094972662 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 19:44:31 compute-0 openstack_network_exporter[205914]: ERROR   19:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:44:31 compute-0 openstack_network_exporter[205914]: ERROR   19:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:44:31 compute-0 openstack_network_exporter[205914]: ERROR   19:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:44:31 compute-0 openstack_network_exporter[205914]: ERROR   19:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:44:31 compute-0 openstack_network_exporter[205914]: ERROR   19:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:44:33 compute-0 nova_compute[189564]: 2025-12-01 19:44:33.209 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:35 compute-0 nova_compute[189564]: 2025-12-01 19:44:35.910 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:36 compute-0 podman[245961]: 2025-12-01 19:44:36.351974903 +0000 UTC m=+0.113876310 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:44:38 compute-0 nova_compute[189564]: 2025-12-01 19:44:38.213 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:40 compute-0 nova_compute[189564]: 2025-12-01 19:44:40.913 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:42 compute-0 podman[245983]: 2025-12-01 19:44:42.331504101 +0000 UTC m=+0.091084341 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  1 19:44:43 compute-0 nova_compute[189564]: 2025-12-01 19:44:43.217 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:45 compute-0 nova_compute[189564]: 2025-12-01 19:44:45.916 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:47 compute-0 podman[246002]: 2025-12-01 19:44:47.344491373 +0000 UTC m=+0.102322460 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 19:44:48 compute-0 nova_compute[189564]: 2025-12-01 19:44:48.220 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:50 compute-0 podman[246024]: 2025-12-01 19:44:50.354358695 +0000 UTC m=+0.117530822 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Dec  1 19:44:50 compute-0 podman[246025]: 2025-12-01 19:44:50.362857899 +0000 UTC m=+0.120486634 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi)
Dec  1 19:44:50 compute-0 podman[246064]: 2025-12-01 19:44:50.466133238 +0000 UTC m=+0.083509425 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec  1 19:44:50 compute-0 podman[246082]: 2025-12-01 19:44:50.610928747 +0000 UTC m=+0.106378576 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 19:44:50 compute-0 podman[246083]: 2025-12-01 19:44:50.67925018 +0000 UTC m=+0.174818733 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 19:44:50 compute-0 nova_compute[189564]: 2025-12-01 19:44:50.919 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:53 compute-0 nova_compute[189564]: 2025-12-01 19:44:53.223 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:55 compute-0 nova_compute[189564]: 2025-12-01 19:44:55.923 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:58 compute-0 nova_compute[189564]: 2025-12-01 19:44:58.227 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:44:58 compute-0 nova_compute[189564]: 2025-12-01 19:44:58.981 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:44:58 compute-0 nova_compute[189564]: 2025-12-01 19:44:58.981 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:44:59 compute-0 podman[203750]: time="2025-12-01T19:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:44:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:44:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec  1 19:44:59 compute-0 nova_compute[189564]: 2025-12-01 19:44:59.929 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:44:59 compute-0 nova_compute[189564]: 2025-12-01 19:44:59.931 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:44:59 compute-0 nova_compute[189564]: 2025-12-01 19:44:59.931 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:45:00 compute-0 nova_compute[189564]: 2025-12-01 19:45:00.927 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:45:01 compute-0 openstack_network_exporter[205914]: ERROR   19:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:45:01 compute-0 openstack_network_exporter[205914]: ERROR   19:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:45:01 compute-0 openstack_network_exporter[205914]: ERROR   19:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:45:01 compute-0 openstack_network_exporter[205914]: ERROR   19:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:45:01 compute-0 openstack_network_exporter[205914]: 
Dec  1 19:45:01 compute-0 openstack_network_exporter[205914]: ERROR   19:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:45:01 compute-0 openstack_network_exporter[205914]: 
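These exporter errors repeat every poll interval: OVS and OVN daemons advertise control sockets named <daemon>.<pid>.ctl under their run directories, and the exporter fails when its glob for them comes up empty. For ovn-northd that is expected on a compute node, where only ovn-controller runs. A sketch of the lookup, assuming the conventional default run directories:

    import glob

    # Look for <daemon>.<pid>.ctl control sockets the way ovs-appctl does.
    for daemon, rundir in [("ovsdb-server", "/run/openvswitch"),
                           ("ovn-northd", "/run/ovn")]:
        matches = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "->", matches or "no control socket files found")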
Dec  1 19:45:02 compute-0 podman[246130]: 2025-12-01 19:45:02.387111981 +0000 UTC m=+0.142120687 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vcs-type=git, config_id=edpm)
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.105 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.124 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.125 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
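The network_info blob nova just wrote into the cache is a list of VIFs, each carrying its network, subnets, fixed IPs, and any attached floating IPs. An illustrative parse of that structure, condensed to the fields actually read here:

    import json

    # Condensed from the instance_info_cache update logged above: one OVS
    # VIF with a fixed IP and an associated floating IP.
    network_info = json.loads('''[{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.62", "type": "fixed",
          "floating_ips": [{"address": "192.168.122.240", "type": "floating"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)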
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.126 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.126 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.126 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.231 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:45:03 compute-0 nova_compute[189564]: 2025-12-01 19:45:03.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:45:05 compute-0 nova_compute[189564]: 2025-12-01 19:45:05.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:45:05 compute-0 nova_compute[189564]: 2025-12-01 19:45:05.266 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:45:05 compute-0 nova_compute[189564]: 2025-12-01 19:45:05.930 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:07 compute-0 podman[246151]: 2025-12-01 19:45:07.348190033 +0000 UTC m=+0.111465765 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.235 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.284 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.284 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.285 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
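The acquire/waited/held timings above come from oslo.concurrency's named-lock helper, which nova uses to serialize all resource-tracker mutations on a single "compute_resources" semaphore. A minimal sketch of the pattern in decorator form (the tracker itself goes through an equivalent internal wrapper):

    from oslo_concurrency import lockutils

    # Every function synchronized on the same name serializes against the
    # same in-process semaphore; oslo logs the waited/held times seen above.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        ...  # runs with the "compute_resources" lock held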
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.286 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.397 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.493 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.495 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.572 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.574 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.637 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.639 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.700 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.712 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.780 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.783 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.843 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.846 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.915 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:45:08 compute-0 nova_compute[189564]: 2025-12-01 19:45:08.917 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.011 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
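Each disk probe above is qemu-img info run under oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 CPU seconds before nova parses the JSON output. A stripped-down sketch without the prlimit wrapper:

    import json, subprocess

    def qemu_img_info(path):
        # Same qemu-img arguments as the logged command; --force-share lets
        # the probe read a disk image a running guest holds open.
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    # info = qemu_img_info("/var/lib/nova/instances/<uuid>/disk")
    # print(info["format"], info["virtual-size"])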
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.392 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.393 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=72.3614501953125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
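The hypervisor resource view enumerates every PCI device QEMU exposes to this host (itself a VM, per the virtio 1af4 vendor IDs). A quick tally by vendor, with the device list trimmed to two entries for brevity:

    import json
    from collections import Counter

    # 1af4 = Red Hat / virtio, 8086 = Intel; subset of the logged list.
    pci_devices = json.loads(
        '[{"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"},'
        ' {"address": "0000:00:01.3", "vendor_id": "8086", "product_id": "7113"}]')
    print(Counter(d["vendor_id"] for d in pci_devices))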
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.393 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.393 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.523 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.524 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.525 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.525 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.546 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.565 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.565 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.581 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.600 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.674 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.698 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
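Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, which is why 8 physical vCPUs advertise room for 32 guest vCPUs here. Worked out for the logged values:

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2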
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.700 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:45:09 compute-0 nova_compute[189564]: 2025-12-01 19:45:09.701 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:45:10 compute-0 nova_compute[189564]: 2025-12-01 19:45:10.933 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:45:12.197 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:45:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:45:12.199 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:45:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:45:12.199 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:45:13 compute-0 nova_compute[189564]: 2025-12-01 19:45:13.238 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:13 compute-0 podman[246198]: 2025-12-01 19:45:13.359318248 +0000 UTC m=+0.115649925 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  1 19:45:15 compute-0 nova_compute[189564]: 2025-12-01 19:45:15.935 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:18 compute-0 nova_compute[189564]: 2025-12-01 19:45:18.241 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:18 compute-0 podman[246217]: 2025-12-01 19:45:18.324349389 +0000 UTC m=+0.093385852 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:45:20 compute-0 nova_compute[189564]: 2025-12-01 19:45:20.938 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:21 compute-0 podman[246242]: 2025-12-01 19:45:21.343716216 +0000 UTC m=+0.099833153 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:45:21 compute-0 podman[246244]: 2025-12-01 19:45:21.350826837 +0000 UTC m=+0.099700609 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 19:45:21 compute-0 podman[246241]: 2025-12-01 19:45:21.364349247 +0000 UTC m=+0.133051795 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, config_id=edpm, release=1214.1726694543, name=ubi9, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 19:45:21 compute-0 podman[246243]: 2025-12-01 19:45:21.374761181 +0000 UTC m=+0.131106524 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  1 19:45:21 compute-0 podman[246245]: 2025-12-01 19:45:21.394494264 +0000 UTC m=+0.151388185 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
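The burst of health_status events above is podman's healthcheck timer firing for each EDPM-managed container in the same window; every event repeats the container's full config_data. For monitoring, the useful fields are just the name, verdict, and failing streak, which a crude line parse can pull out (the regex targets only the key=value pairs shown, not podman's full event schema):

    import re

    line = ("... container health_status ac5c9902... (image=..., "
            "name=ovn_controller, health_status=healthy, "
            "health_failing_streak=0, ...)")
    m = re.search(r"name=([^,]+).*?health_status=([^,]+)"
                  r".*?health_failing_streak=(\d+)", line)
    if m:
        print(m.groups())   # ('ovn_controller', 'healthy', '0')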
Dec  1 19:45:23 compute-0 nova_compute[189564]: 2025-12-01 19:45:23.246 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:25 compute-0 nova_compute[189564]: 2025-12-01 19:45:25.941 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:28 compute-0 nova_compute[189564]: 2025-12-01 19:45:28.249 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:29 compute-0 podman[203750]: time="2025-12-01T19:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:45:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:45:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  1 19:45:30 compute-0 nova_compute[189564]: 2025-12-01 19:45:30.944 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:31 compute-0 openstack_network_exporter[205914]: ERROR   19:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:45:31 compute-0 openstack_network_exporter[205914]: ERROR   19:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:45:31 compute-0 openstack_network_exporter[205914]: ERROR   19:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:45:31 compute-0 openstack_network_exporter[205914]: 
Dec  1 19:45:31 compute-0 openstack_network_exporter[205914]: ERROR   19:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:45:31 compute-0 openstack_network_exporter[205914]: ERROR   19:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:45:31 compute-0 openstack_network_exporter[205914]: 
Dec  1 19:45:33 compute-0 nova_compute[189564]: 2025-12-01 19:45:33.251 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:33 compute-0 podman[246341]: 2025-12-01 19:45:33.339096969 +0000 UTC m=+0.101828375 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 19:45:35 compute-0 nova_compute[189564]: 2025-12-01 19:45:35.947 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:38 compute-0 nova_compute[189564]: 2025-12-01 19:45:38.253 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:38 compute-0 podman[246364]: 2025-12-01 19:45:38.332132116 +0000 UTC m=+0.095089446 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 19:45:40 compute-0 nova_compute[189564]: 2025-12-01 19:45:40.950 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:43 compute-0 nova_compute[189564]: 2025-12-01 19:45:43.259 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:45:44 compute-0 podman[246387]: 2025-12-01 19:45:44.368727428 +0000 UTC m=+0.125208621 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 19:45:45 compute-0 nova_compute[189564]: 2025-12-01 19:45:45.952 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:45:48 compute-0 nova_compute[189564]: 2025-12-01 19:45:48.260 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
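The recurring [POLLIN] wakeups above are nova-compute's OVSDB IDL poll loop: the connection's file descriptor is registered for read events, and the loop blocks until ovsdb-server sends data. A standard-library sketch of that pattern (the address is a stand-in for the local ovsdb-server; this is not ovsdbapp's actual code):

    import select
    import socket

    # 6640 is the conventional OVSDB port; the real endpoint comes from
    # the OVN/nova configuration, so treat this address as illustrative.
    sock = socket.create_connection(("127.0.0.1", 6640))
    poller = select.poll()
    poller.register(sock.fileno(), select.POLLIN)

    while True:
        for fd, events in poller.poll():       # blocks until the fd is readable
            if events & select.POLLIN:
                print(f"[POLLIN] on fd {fd}")  # analogue of __log_wakeup above
                if not sock.recv(4096):        # drain the pending update
                    raise ConnectionError("ovsdb-server closed the connection")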
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.815 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, this polling task can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.815 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
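The registration burst above shows the polling manager handing each stevedore extension to a ThreadPoolExecutor together with shared cache, pollster-history, and discovery-cache dicts; with a single worker thread (as warned earlier), the queued pollsters run sequentially. A condensed, hypothetical sketch of that pattern (Pollster stands in for the extensions and is not ceilometer's class):

    from concurrent.futures import ThreadPoolExecutor

    class Pollster:
        def __init__(self, name):
            self.name = name

        def run(self, cache, history, discovery_cache):
            # Every pollster sees the same per-cycle dicts, matching
            # "with cache [{}], pollster history [{}], and discovery cache [{}]".
            print(f"Polling pollster {self.name}")

    pollsters = [Pollster("network.incoming.bytes.delta"),
                 Pollster("network.outgoing.packets")]

    cache, history, discovery_cache = {}, {}, {}
    with ThreadPoolExecutor(max_workers=1) as executor:   # "[1] threads"
        futures = [executor.submit(p.run, cache, history, discovery_cache)
                   for p in pollsters]
        for f in futures:
            f.result()   # one worker, so pollsters execute one after another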
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.836 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.840 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'name': 'vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
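The two instance-data dicts above are produced by local-instance discovery: the agent enumerates the libvirt domains running on this hypervisor and merges in Nova attributes such as flavor and tenant. A minimal sketch of the libvirt side, assuming the libvirt-python bindings and the usual qemu:///system URI (the Nova metadata lookup is omitted):

    import libvirt  # libvirt-python bindings

    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
            # e.g. instance-00000001 / e73931e9-f7fa-4666-b781-700b385532a9
            print(dom.UUIDString(), dom.name())
    finally:
        conn.close()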
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.840 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.840 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.840 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.840 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:45:48.840863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
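The pair of heartbeat lines above shows the division of labor: the polling thread (15) stamps a per-pollster heartbeat, and a status worker (12) records the timestamp. A toy sketch of that bookkeeping, not ceilometer's actual implementation:

    import datetime
    import threading

    _heartbeats = {}
    _lock = threading.Lock()

    def heartbeat(pollster_name):
        # Polling side: note that this pollster made progress just now.
        with _lock:
            _heartbeats[pollster_name] = datetime.datetime.now(datetime.timezone.utc)

    def report_status():
        # Status side: read the stamps back, as thread 12 logs above.
        with _lock:
            for name, ts in _heartbeats.items():
                print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    heartbeat("network.incoming.bytes.delta")
    report_status()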
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.846 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.853 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.854 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
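Both network.incoming.bytes.delta samples report volume 0 because a .delta meter is the difference between the current cumulative interface counter and the reading kept from the previous cycle; with no new traffic the difference is zero. A small illustration of that computation (the key and counter values are made up):

    _previous = {}

    def bytes_delta(key, cumulative):
        # The first cycle has no baseline, so the delta starts at 0;
        # afterwards it is the current reading minus the last one.
        last = _previous.get(key, cumulative)
        _previous[key] = cumulative
        return cumulative - last

    print(bytes_delta("instance-a/eth0", 1000))  # first cycle -> 0
    print(bytes_delta("instance-a/eth0", 1000))  # idle interval -> 0
    print(bytes_delta("instance-a/eth0", 1500))  # 500 new bytes -> 500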
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.855 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.855 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.855 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.855 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.855 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.856 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.856 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.857 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.857 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.857 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.857 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.858 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.858 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.858 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.858 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.859 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.859 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.860 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.860 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.860 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.860 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.860 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.861 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.861 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.862 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.862 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.862 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.863 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.863 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:45:48.855800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.863 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:45:48.858169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:45:48.860429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.864 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.864 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:45:48.863625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.865 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.865 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.865 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.866 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.866 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:45:48.866146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.901 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.902 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.902 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.930 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.930 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.931 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.932 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
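The three disk.device.capacity samples per instance (two 1 GiB volumes plus a smaller config-drive-sized device) are one reading per attached disk. In libvirt terms this information comes from virDomainGetBlockInfo, which reports capacity, allocation, and physical size in bytes per device; the later disk.device.usage samples appear to correspond to the on-disk rather than the virtual size. A minimal sketch with assumed virtio target names:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    # blockInfo() returns (capacity, allocation, physical) for one disk
    # target; vda/vdb/vdc are assumed device names, not read from the log.
    for dev in ("vda", "vdb", "vdc"):
        capacity, allocation, physical = dom.blockInfo(dev)
        print(f"{dev}: disk.device.capacity={capacity} on-disk={physical}")
    conn.close()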
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.932 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.932 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.932 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.933 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.933 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:48.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:45:48.933298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.009 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.010 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.010 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.099 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.100 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.100 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.101 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
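The read-bytes samples above are cumulative counters taken per device. A sketch of the underlying libvirt call, which also backs the disk.device.read.requests and disk.device.write.bytes pollsters polled shortly after (domain and device names are assumed):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    # blockStats() returns cumulative counters for one device:
    # (rd_req, rd_bytes, wr_req, wr_bytes, errs).
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print(f"disk.device.read.bytes={rd_bytes} disk.device.read.requests={rd_req}")
    print(f"disk.device.write.bytes={wr_bytes}")
    conn.close()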
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.101 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.101 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.101 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.102 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.102 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.102 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.102 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.102 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.103 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
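network.incoming.bytes.rate is skipped because its discovery step returned no resources this cycle. A trivial sketch of that guard (names are illustrative, not ceilometer's code):

    def run_pollster(name, resources):
        # Without freshly discovered resources there is nothing to
        # sample, so the cycle is skipped rather than polled empty.
        if not resources:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return
        for resource in resources:
            print(f"sampling {name} for {resource}")

    run_pollster("network.incoming.bytes.rate", [])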
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.103 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.103 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.103 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.103 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.103 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.104 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.105 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:45:49.102116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:45:49.103744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.105 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 578521054 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.105 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 98903610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.106 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 76991265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.106 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.106 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.106 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.106 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.107 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.107 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.107 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.107 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.107 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.108 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.108 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.108 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.108 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.108 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.108 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.109 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.109 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.109 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.109 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:45:49.107014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.109 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.109 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.109 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.110 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.110 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.110 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.110 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.110 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.110 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.111 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:45:49.109229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.111 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.111 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.111 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:45:49.111235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.111 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.111 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.112 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.112 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.112 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.112 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
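
The block above is one complete poll cycle for a single meter, and the same shape repeats for every meter below: run discovery, check whether the pollster needs coordination (none of these do, hence the [None] hashrings), update the heartbeat, emit one sample per instance and device, then log completion. A minimal sketch of that control flow, with purely illustrative names rather than ceilometer's actual internals:

    # Sketch of the per-pollster cycle visible in the log; every name here
    # is hypothetical -- it mirrors the log messages, not ceilometer's code.
    def run_pollster(pollster, discover, heartbeat, publish):
        resources = discover("local_instances")      # "Executing discovery process ..."
        if pollster.coordination_group is not None:  # "Checking if we need coordination ..."
            return                                   # would require hashring membership
        heartbeat(pollster.name)                     # "Pollster heartbeat update: <name>"
        for resource in resources:
            for sample in pollster.get_samples(resource):
                publish(sample)                      # one "_stats_to_sample" line each
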
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.112 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.112 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.113 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.113 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.113 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:45:49.113129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.148 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.177 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.177 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
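
Both instances report power.state volume 1, which matches VIR_DOMAIN_RUNNING (value 1) in libvirt's domain-state enum. A sketch of reading that state directly with the libvirt-python binding, reusing a UUID from the log; whether the pollster derives it exactly this way is an assumption:

    # Sketch: power.state as a libvirt domain state; 1 == VIR_DOMAIN_RUNNING.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    state, reason = dom.state()   # e.g. (1, 1) for a running domain
    print(state)                  # cf. "power.state volume: 1" above
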
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.178 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.178 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.178 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.178 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.178 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.178 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.178 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.178 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.179 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 2063543219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.179 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 12721696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.179 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.179 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.179 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.179 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.179 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.180 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.180 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.180 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.180 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.180 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.180 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:45:49.178294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.181 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.181 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.181 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
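
Each instance yields three write.bytes/latency/requests samples, i.e. one per attached block device. libvirt exposes exactly these cumulative counters per device; a sketch, with the device names being assumptions (in practice they come from the domain XML):

    # Sketch: per-device disk counters from libvirt; counters are cumulative
    # since boot, like the "volume:" values above. Device names hypothetical.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("850ac274-3f22-41ce-b7d7-ac64d7adac70")
    for dev in ("vda", "vdb", "vdc"):
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, wr_req, wr_bytes)
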
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.181 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.182 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.182 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.182 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.182 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.182 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.182 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.183 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.183 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:45:49.180113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.183 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.183 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:45:49.182450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.184 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
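
disk.device.allocation tracks the bytes actually allocated on the backing store, which libvirt reports per device alongside capacity and physical size. A sketch (device name assumed):

    # Sketch: blockInfo() returns (capacity, allocation, physical) in bytes.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    capacity, allocation, physical = dom.blockInfo("vda")  # device assumed
    print(allocation)   # cf. the 21307392-byte sample above
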
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.184 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.184 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.184 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.184 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.184 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.185 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.185 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.185 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.185 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.185 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.185 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.185 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:45:49.184555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.185 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:45:49.185461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.186 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.186 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.186 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.186 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.186 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.186 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.187 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.187 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.187 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.187 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.187 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.187 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.187 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:45:49.186935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.188 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.188 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.188 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.188 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.188 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.188 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.188 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.188 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.189 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.189 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.189 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.189 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.189 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.190 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:45:49.187780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.190 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.190 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.190 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.190 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
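
All of the network.* meters in this stretch (incoming/outgoing bytes, packets, drops, errors) map onto a single libvirt call per vNIC, which returns the eight counters together. A sketch; the tap device name is an assumption and would normally be read from the domain XML:

    # Sketch: one interfaceStats() call backs several network.* meters.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap0")
    print(rx_packets, rx_drop, rx_errs, tx_bytes)
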
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.190 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.190 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
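
Unlike the plain byte counters, a *.rate meter cannot be produced from a single reading: it needs the previous cumulative value and timestamp to divide by. A minimal sketch of that delta computation, with a hypothetical in-memory cache (this illustrates the arithmetic, not why this particular cycle was skipped):

    # Sketch: turning a cumulative counter into a rate needs two readings.
    import time

    _cache = {}  # hypothetical: (instance, meter) -> (timestamp, value)

    def rate(key, value):
        now = time.monotonic()
        prev = _cache.get(key)
        _cache[key] = (now, value)
        if prev is None:
            return None                    # first reading: no rate yet
        t0, v0 = prev
        return (value - v0) / (now - t0)   # e.g. bytes per second
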
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.190 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:45:49.188877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 46320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:45:49.190109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/cpu volume: 39210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:45:49.191335) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.191 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
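
The cpu meter is cumulative guest CPU time in nanoseconds, so the 46320000000 above is about 46.3 s of CPU time since boot. Converting it to a utilisation percentage again takes two readings; the second value below is invented for illustration:

    # Sketch: CPU utilisation from two cumulative cpu-time readings (ns).
    def cpu_util_percent(ns0, ns1, seconds, vcpus):
        return 100.0 * (ns1 - ns0) / (seconds * vcpus * 1e9)

    # 10 s apart on a 1-vCPU guest, second reading hypothetical:
    print(cpu_util_percent(46320000000, 46920000000, 10, 1))  # -> 6.0
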
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.192 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.192 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.192 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.192 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.192 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.192 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.192 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.193 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:45:49.192370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
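
memory.usage is reported in MiB (the ~48.8 and ~48.9 values above). One common derivation from libvirt's balloon statistics is available minus unused, falling back to RSS; whether the pollster computes it exactly this way is an assumption:

    # Sketch: memoryStats() values are in KiB; convert to MiB.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    stats = dom.memoryStats()
    if "available" in stats and "unused" in stats:
        usage_mib = (stats["available"] - stats["unused"]) / 1024.0
    else:
        usage_mib = stats.get("rss", 0) / 1024.0   # fallback assumption
    print(usage_mib)
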
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.193 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.194 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.195 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:45:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:45:49.196 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
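
The run of "Finished processing pollster [...]" lines closes the polling task, one line per meter in this cycle. A quick way to audit such a cycle from the log itself (the log path is an assumption):

    # Sketch: list every meter that completed in this polling task.
    import re

    finished = set()
    pattern = re.compile(r"Finished processing pollster \[([^\]]+)\]")
    with open("/var/log/messages") as fh:   # path assumed
        for line in fh:
            m = pattern.search(line)
            if m:
                finished.add(m.group(1))
    print(sorted(finished))   # cpu, disk.device.*, memory.usage, network.*, ...
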
Dec  1 19:45:49 compute-0 podman[246408]: 2025-12-01 19:45:49.326472442 +0000 UTC m=+0.089702848 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 19:45:50 compute-0 nova_compute[189564]: 2025-12-01 19:45:50.954 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:45:52 compute-0 podman[246436]: 2025-12-01 19:45:52.351051752 +0000 UTC m=+0.095194509 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  1 19:45:52 compute-0 podman[246433]: 2025-12-01 19:45:52.369134014 +0000 UTC m=+0.126167211 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, io.openshift.expose-services=)
Dec  1 19:45:52 compute-0 podman[246435]: 2025-12-01 19:45:52.381816788 +0000 UTC m=+0.127916355 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 19:45:52 compute-0 podman[246434]: 2025-12-01 19:45:52.383424948 +0000 UTC m=+0.132391384 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:45:52 compute-0 podman[246437]: 2025-12-01 19:45:52.440993927 +0000 UTC m=+0.182056508 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
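
The podman lines above are periodic healthcheck events: each container's 'healthcheck' entry in config_data names the probe, and health_status=healthy with health_failing_streak=0 is the outcome. The same status can be read back with podman inspect; the Go-template field path varies across podman versions, so treat it as an assumption:

    # Sketch: query the health status podman just logged for each container.
    import subprocess

    for name in ("podman_exporter", "ovn_metadata_agent", "kepler",
                 "ceilometer_agent_compute", "ceilometer_agent_ipmi",
                 "ovn_controller"):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True)   # field path version-dependent
        print(name, out.stdout.strip() or out.stderr.strip())
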
Dec  1 19:45:53 compute-0 nova_compute[189564]: 2025-12-01 19:45:53.262 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:45:55 compute-0 nova_compute[189564]: 2025-12-01 19:45:55.957 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:45:58 compute-0 nova_compute[189564]: 2025-12-01 19:45:58.268 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:45:59 compute-0 podman[203750]: time="2025-12-01T19:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:45:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:45:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
Dec  1 19:46:00 compute-0 nova_compute[189564]: 2025-12-01 19:46:00.702 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:00 compute-0 nova_compute[189564]: 2025-12-01 19:46:00.703 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:46:00 compute-0 nova_compute[189564]: 2025-12-01 19:46:00.703 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:46:00 compute-0 nova_compute[189564]: 2025-12-01 19:46:00.960 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:00 compute-0 nova_compute[189564]: 2025-12-01 19:46:00.978 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:46:00 compute-0 nova_compute[189564]: 2025-12-01 19:46:00.979 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:46:00 compute-0 nova_compute[189564]: 2025-12-01 19:46:00.980 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:46:00 compute-0 nova_compute[189564]: 2025-12-01 19:46:00.981 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:46:01 compute-0 openstack_network_exporter[205914]: ERROR   19:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:46:01 compute-0 openstack_network_exporter[205914]: ERROR   19:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:46:01 compute-0 openstack_network_exporter[205914]: ERROR   19:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:46:01 compute-0 openstack_network_exporter[205914]: ERROR   19:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:46:01 compute-0 openstack_network_exporter[205914]: ERROR   19:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:46:03 compute-0 nova_compute[189564]: 2025-12-01 19:46:03.015 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:46:03 compute-0 nova_compute[189564]: 2025-12-01 19:46:03.031 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:46:03 compute-0 nova_compute[189564]: 2025-12-01 19:46:03.032 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 19:46:03 compute-0 nova_compute[189564]: 2025-12-01 19:46:03.032 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:03 compute-0 nova_compute[189564]: 2025-12-01 19:46:03.033 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:03 compute-0 nova_compute[189564]: 2025-12-01 19:46:03.033 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:46:03 compute-0 nova_compute[189564]: 2025-12-01 19:46:03.270 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:04 compute-0 nova_compute[189564]: 2025-12-01 19:46:04.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:04 compute-0 nova_compute[189564]: 2025-12-01 19:46:04.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:04 compute-0 podman[246527]: 2025-12-01 19:46:04.37938912 +0000 UTC m=+0.140499527 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64)
Dec  1 19:46:05 compute-0 nova_compute[189564]: 2025-12-01 19:46:05.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:05 compute-0 nova_compute[189564]: 2025-12-01 19:46:05.966 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:07 compute-0 nova_compute[189564]: 2025-12-01 19:46:07.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:08 compute-0 nova_compute[189564]: 2025-12-01 19:46:08.273 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.274 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.274 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.275 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.275 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:46:09 compute-0 podman[246548]: 2025-12-01 19:46:09.365685433 +0000 UTC m=+0.117026627 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.382 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.475 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.476 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.575 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.577 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.658 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.659 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.737 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.746 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.840 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.841 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.931 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:46:09 compute-0 nova_compute[189564]: 2025-12-01 19:46:09.933 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.029 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.031 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.101 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.739 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.742 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4901MB free_disk=72.3614501953125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.743 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.743 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.962 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.963 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.964 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.965 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:46:10 compute-0 nova_compute[189564]: 2025-12-01 19:46:10.970 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:11 compute-0 nova_compute[189564]: 2025-12-01 19:46:11.147 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:46:11 compute-0 nova_compute[189564]: 2025-12-01 19:46:11.166 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:46:11 compute-0 nova_compute[189564]: 2025-12-01 19:46:11.168 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:46:11 compute-0 nova_compute[189564]: 2025-12-01 19:46:11.168 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:46:12 compute-0 nova_compute[189564]: 2025-12-01 19:46:12.170 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:46:12.199 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:46:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:46:12.200 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:46:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:46:12.201 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:46:13 compute-0 nova_compute[189564]: 2025-12-01 19:46:13.275 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:14 compute-0 podman[246595]: 2025-12-01 19:46:14.855843268 +0000 UTC m=+0.131282490 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 19:46:15 compute-0 nova_compute[189564]: 2025-12-01 19:46:15.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:15 compute-0 nova_compute[189564]: 2025-12-01 19:46:15.973 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:18 compute-0 nova_compute[189564]: 2025-12-01 19:46:18.271 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:18 compute-0 nova_compute[189564]: 2025-12-01 19:46:18.272 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 19:46:18 compute-0 nova_compute[189564]: 2025-12-01 19:46:18.277 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:20 compute-0 nova_compute[189564]: 2025-12-01 19:46:20.270 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:46:20 compute-0 nova_compute[189564]: 2025-12-01 19:46:20.271 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 19:46:20 compute-0 podman[246614]: 2025-12-01 19:46:20.395498533 +0000 UTC m=+0.144146000 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:46:20 compute-0 nova_compute[189564]: 2025-12-01 19:46:20.444 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 19:46:20 compute-0 nova_compute[189564]: 2025-12-01 19:46:20.975 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:23 compute-0 nova_compute[189564]: 2025-12-01 19:46:23.279 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:23 compute-0 podman[246640]: 2025-12-01 19:46:23.349775266 +0000 UTC m=+0.092999309 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 19:46:23 compute-0 podman[246639]: 2025-12-01 19:46:23.36953353 +0000 UTC m=+0.119866315 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, version=9.4, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Dec  1 19:46:23 compute-0 podman[246642]: 2025-12-01 19:46:23.394976171 +0000 UTC m=+0.133801308 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:46:23 compute-0 podman[246641]: 2025-12-01 19:46:23.402024461 +0000 UTC m=+0.138811205 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  1 19:46:23 compute-0 podman[246646]: 2025-12-01 19:46:23.416452039 +0000 UTC m=+0.140761215 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:46:25 compute-0 nova_compute[189564]: 2025-12-01 19:46:25.978 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:28 compute-0 nova_compute[189564]: 2025-12-01 19:46:28.282 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:29 compute-0 podman[203750]: time="2025-12-01T19:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:46:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:46:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec  1 19:46:30 compute-0 nova_compute[189564]: 2025-12-01 19:46:30.980 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:31 compute-0 openstack_network_exporter[205914]: ERROR   19:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:46:31 compute-0 openstack_network_exporter[205914]: ERROR   19:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:46:31 compute-0 openstack_network_exporter[205914]: ERROR   19:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:46:31 compute-0 openstack_network_exporter[205914]: ERROR   19:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:46:31 compute-0 openstack_network_exporter[205914]: ERROR   19:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:46:33 compute-0 nova_compute[189564]: 2025-12-01 19:46:33.286 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:35 compute-0 podman[246741]: 2025-12-01 19:46:35.357973669 +0000 UTC m=+0.112273149 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:46:35 compute-0 nova_compute[189564]: 2025-12-01 19:46:35.982 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:38 compute-0 nova_compute[189564]: 2025-12-01 19:46:38.287 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:40 compute-0 podman[246761]: 2025-12-01 19:46:40.342272285 +0000 UTC m=+0.112660362 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 19:46:40 compute-0 nova_compute[189564]: 2025-12-01 19:46:40.986 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:43 compute-0 nova_compute[189564]: 2025-12-01 19:46:43.291 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:46:45 compute-0 podman[246786]: 2025-12-01 19:46:45.354525817 +0000 UTC m=+0.123819468 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  1 19:46:45 compute-0 nova_compute[189564]: 2025-12-01 19:46:45.990 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:46:48 compute-0 nova_compute[189564]: 2025-12-01 19:46:48.294 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:46:50 compute-0 nova_compute[189564]: 2025-12-01 19:46:50.993 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:46:51 compute-0 podman[246809]: 2025-12-01 19:46:51.856150013 +0000 UTC m=+0.107778951 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:46:53 compute-0 nova_compute[189564]: 2025-12-01 19:46:53.296 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:46:54 compute-0 podman[246832]: 2025-12-01 19:46:54.359248519 +0000 UTC m=+0.103301891 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 19:46:54 compute-0 podman[246834]: 2025-12-01 19:46:54.362321544 +0000 UTC m=+0.099438411 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  1 19:46:54 compute-0 podman[246833]: 2025-12-01 19:46:54.363898263 +0000 UTC m=+0.108548784 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  1 19:46:54 compute-0 podman[246831]: 2025-12-01 19:46:54.386819125 +0000 UTC m=+0.140992602 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, distribution-scope=public, container_name=kepler, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  1 19:46:54 compute-0 podman[246835]: 2025-12-01 19:46:54.386512966 +0000 UTC m=+0.131121605 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:46:55 compute-0 nova_compute[189564]: 2025-12-01 19:46:55.996 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:46:58 compute-0 nova_compute[189564]: 2025-12-01 19:46:58.299 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:46:59 compute-0 podman[203750]: time="2025-12-01T19:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:46:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:46:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec  1 19:47:01 compute-0 nova_compute[189564]: 2025-12-01 19:47:01.000 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:01 compute-0 openstack_network_exporter[205914]: ERROR   19:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:47:01 compute-0 openstack_network_exporter[205914]: ERROR   19:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:47:01 compute-0 openstack_network_exporter[205914]: ERROR   19:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:47:01 compute-0 openstack_network_exporter[205914]: ERROR   19:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:47:01 compute-0 openstack_network_exporter[205914]: ERROR   19:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:47:01 compute-0 nova_compute[189564]: 2025-12-01 19:47:01.422 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:01 compute-0 nova_compute[189564]: 2025-12-01 19:47:01.423 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:47:01 compute-0 nova_compute[189564]: 2025-12-01 19:47:01.871 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:47:01 compute-0 nova_compute[189564]: 2025-12-01 19:47:01.872 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:47:01 compute-0 nova_compute[189564]: 2025-12-01 19:47:01.873 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:47:03 compute-0 nova_compute[189564]: 2025-12-01 19:47:03.303 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:03 compute-0 nova_compute[189564]: 2025-12-01 19:47:03.383 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:47:03 compute-0 nova_compute[189564]: 2025-12-01 19:47:03.401 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:47:03 compute-0 nova_compute[189564]: 2025-12-01 19:47:03.401 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 19:47:03 compute-0 nova_compute[189564]: 2025-12-01 19:47:03.401 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:03 compute-0 nova_compute[189564]: 2025-12-01 19:47:03.402 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:03 compute-0 nova_compute[189564]: 2025-12-01 19:47:03.402 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:47:05 compute-0 nova_compute[189564]: 2025-12-01 19:47:05.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:06 compute-0 nova_compute[189564]: 2025-12-01 19:47:06.006 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:06 compute-0 nova_compute[189564]: 2025-12-01 19:47:06.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:06 compute-0 podman[246926]: 2025-12-01 19:47:06.367135233 +0000 UTC m=+0.128468193 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container)
Dec  1 19:47:07 compute-0 nova_compute[189564]: 2025-12-01 19:47:07.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:08 compute-0 nova_compute[189564]: 2025-12-01 19:47:08.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:08 compute-0 nova_compute[189564]: 2025-12-01 19:47:08.306 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.294 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.295 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.296 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.296 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.422 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.515 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.517 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.619 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.621 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.685 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.688 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.780 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.793 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.893 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.895 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:47:09 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.998 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:09.999 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.060 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.062 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.159 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.746 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.747 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4853MB free_disk=72.3614501953125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.747 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.748 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.839 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.840 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.840 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.840 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.920 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.941 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.944 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:47:10 compute-0 nova_compute[189564]: 2025-12-01 19:47:10.945 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:47:11 compute-0 nova_compute[189564]: 2025-12-01 19:47:11.009 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:11 compute-0 podman[246972]: 2025-12-01 19:47:11.363761695 +0000 UTC m=+0.111054934 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:47:11 compute-0 nova_compute[189564]: 2025-12-01 19:47:11.940 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:11 compute-0 nova_compute[189564]: 2025-12-01 19:47:11.962 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:47:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:47:12.200 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:47:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:47:12.201 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:47:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:47:12.203 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:47:13 compute-0 nova_compute[189564]: 2025-12-01 19:47:13.310 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:16 compute-0 nova_compute[189564]: 2025-12-01 19:47:16.010 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:16 compute-0 podman[246996]: 2025-12-01 19:47:16.370352196 +0000 UTC m=+0.133637173 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 19:47:18 compute-0 nova_compute[189564]: 2025-12-01 19:47:18.313 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:21 compute-0 nova_compute[189564]: 2025-12-01 19:47:21.012 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:22 compute-0 podman[247017]: 2025-12-01 19:47:22.342010696 +0000 UTC m=+0.099220688 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 19:47:23 compute-0 nova_compute[189564]: 2025-12-01 19:47:23.316 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:25 compute-0 podman[247043]: 2025-12-01 19:47:25.354890734 +0000 UTC m=+0.104661836 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  1 19:47:25 compute-0 podman[247042]: 2025-12-01 19:47:25.362702335 +0000 UTC m=+0.119663830 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125)
Dec  1 19:47:25 compute-0 podman[247044]: 2025-12-01 19:47:25.369424574 +0000 UTC m=+0.126121171 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 19:47:25 compute-0 podman[247040]: 2025-12-01 19:47:25.376250205 +0000 UTC m=+0.136528923 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 19:47:25 compute-0 podman[247041]: 2025-12-01 19:47:25.387509663 +0000 UTC m=+0.141124734 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
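
The three health_status events above are podman's periodic healthchecks firing: for each container, podman runs the 'test' command from the logged config_data inside the container and records the result that these events then report. A minimal sketch of reproducing one check by hand with the podman CLI (assuming rootful podman on the host; the container name is taken from the first event):

    import subprocess

    # Run the container's configured healthcheck once, then read back the
    # recorded status -- the same field the health_status events report.
    name = "ovn_controller"  # container_name from the event above
    subprocess.run(["podman", "healthcheck", "run", name], check=False)
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True,
    )
    print(status.stdout.strip())  # expected here: "healthy"
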
Dec  1 19:47:26 compute-0 nova_compute[189564]: 2025-12-01 19:47:26.015 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:28 compute-0 nova_compute[189564]: 2025-12-01 19:47:28.319 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
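
The recurring "[POLLIN] on fd 27" DEBUG lines are ovsdbapp's OVSDB IDL waking up whenever its database connection becomes readable; the ovs poller module referenced in the path wraps the poll(2) interface. A minimal illustration of that wakeup pattern with the standard library (illustrative only, not ovsdbapp's own code):

    import select
    import socket

    # Register one end of a socket pair with a poller and block until the
    # peer writes; readability is reported as POLLIN, as in the lines above.
    a, b = socket.socketpair()
    poller = select.poll()
    poller.register(a.fileno(), select.POLLIN)
    b.send(b"ovsdb update")
    for fd, events in poller.poll():
        if events & select.POLLIN:
            print("[POLLIN] on fd", fd)
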
Dec  1 19:47:29 compute-0 podman[203750]: time="2025-12-01T19:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:47:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:47:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
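
The two "GET /v4.9.3/libpod/..." access-log lines show a client walking the libpod REST API that the podman service (PID 203750) exposes over a unix socket. A minimal sketch of issuing the same containers/json request, assuming the default rootful socket path /run/podman/podman.sock:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")  # host only feeds the Host header
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    print(len(json.loads(body)), "containers")  # the 29521-byte 200 above is this payload
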
Dec  1 19:47:31 compute-0 nova_compute[189564]: 2025-12-01 19:47:31.019 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:31 compute-0 openstack_network_exporter[205914]: ERROR   19:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:47:31 compute-0 openstack_network_exporter[205914]: ERROR   19:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:47:31 compute-0 openstack_network_exporter[205914]: ERROR   19:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:47:31 compute-0 openstack_network_exporter[205914]: ERROR   19:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:47:31 compute-0 openstack_network_exporter[205914]: ERROR   19:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
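
The repeated ERROR lines above come from openstack_network_exporter probing daemons through their ovs-appctl-style control sockets. On a compute node neither ovn-northd nor a standalone ovsdb-server control socket is expected, and with no userspace (dpif-netdev) datapath the pmd-perf-show/pmd-rxq-show calls have nothing to query, so on this host these errors are expected noise rather than a fault. A small sketch of the kind of lookup that is failing (run directories taken from the exporter container's volume mounts logged below; the <daemon>.<pid>.ctl naming is the usual OVS convention, assumed here):

    import glob

    # OVS/OVN daemons create <name>.<pid>.ctl control sockets in their run
    # directory; an appctl-style client needs one to issue commands.
    for daemon, rundir in [("ovn-northd", "/run/ovn"),
                           ("ovsdb-server", "/run/openvswitch")]:
        sockets = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "->", sockets or "no control socket files found")
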
Dec  1 19:47:33 compute-0 nova_compute[189564]: 2025-12-01 19:47:33.322 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:36 compute-0 nova_compute[189564]: 2025-12-01 19:47:36.022 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:37 compute-0 podman[247139]: 2025-12-01 19:47:37.055299057 +0000 UTC m=+0.096988589 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec  1 19:47:38 compute-0 nova_compute[189564]: 2025-12-01 19:47:38.325 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:41 compute-0 nova_compute[189564]: 2025-12-01 19:47:41.024 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:42 compute-0 podman[247160]: 2025-12-01 19:47:42.311943081 +0000 UTC m=+0.080960734 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
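
The node_exporter above is started with most collectors disabled and the systemd collector restricted by --collector.systemd.unit-include; the logged value is a regular expression (the doubled backslash is just Python-dict escaping). A quick check of what that filter admits, using sample unit names (node_exporter anchors the pattern, which fullmatch approximates here):

    import re

    # The unit-include pattern as logged, with the escaping undone.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["edpm_nova_compute.service", "openvswitch.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, "->", bool(unit_include.fullmatch(unit)))
    # sshd.service is filtered out; the others surface as systemd unit metrics.
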
Dec  1 19:47:43 compute-0 nova_compute[189564]: 2025-12-01 19:47:43.328 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:46 compute-0 nova_compute[189564]: 2025-12-01 19:47:46.027 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:47 compute-0 podman[247183]: 2025-12-01 19:47:47.398062543 +0000 UTC m=+0.152516906 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 19:47:48 compute-0 nova_compute[189564]: 2025-12-01 19:47:48.331 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.816 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.816 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
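
The two manager lines above note that the [pollsters] source has more pollsters than worker threads, and that with [1] thread the pollsters in this source run strictly one after another, stretching the polling cycle. A minimal sketch of that serialization effect with concurrent.futures (illustrative timings, not ceilometer internals):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)  # stand-in for one pollster's polling work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # [1] thread, as logged
        list(pool.map(pollster, [f"p{i}" for i in range(5)]))
    print(f"cycle took {time.monotonic() - start:.1f}s")  # ~0.5s: fully serialized
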
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.816 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.817 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.830 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.837 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'name': 'vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
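
The two "instance data" dicts above are what the local_instances discovery returns: one record per libvirt guest on this host, whose fields (flavor, image, tenant and user IDs, and 'metadata' such as metering.server_group on the second instance) become the resource metadata attached to every sample the compute pollsters emit for that instance. A trimmed illustration of consuming those records (fields copied from the log):

    # The two discovery results above, reduced to a few fields.
    instances = [
        {"id": "e73931e9-f7fa-4666-b781-700b385532a9",
         "OS-EXT-STS:vm_state": "running",
         "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512},
         "metadata": {}},
        {"id": "850ac274-3f22-41ce-b7d7-ac64d7adac70",
         "OS-EXT-STS:vm_state": "running",
         "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512},
         "metadata": {"metering.server_group": "47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9"}},
    ]
    for inst in instances:
        if inst["OS-EXT-STS:vm_state"] == "running":
            print(inst["id"], inst["flavor"]["name"],
                  inst["metadata"].get("metering.server_group"))
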
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.838 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.838 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.838 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.838 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:47:48.838645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.847 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.853 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.854 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
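
network.incoming.bytes.delta, like the other *.delta meters polled below, reports the change in a cumulative interface counter since the previous cycle, which is why both instances show volume 0 here while the cumulative network.incoming.bytes samples later in the cycle are non-zero. A hedged sketch of that derivation (names illustrative, not ceilometer's API):

    def counter_delta(previous, current):
        """Delta of a cumulative counter across two polls; a negative step
        means the counter reset (e.g. a reboot), so restart from current."""
        step = current - previous
        return step if step >= 0 else current

    print(counter_delta(2136, 2136))  # 0, as in the delta volumes above
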
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.854 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.855 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.855 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.855 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.855 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.856 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.856 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.857 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.857 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.858 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:47:48.855782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.858 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.858 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:47:48.858911) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.858 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.859 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.859 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.860 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.860 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.861 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.861 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.861 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:47:48.861651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.861 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.862 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.862 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.863 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.863 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.863 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.863 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.864 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:47:48.864224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.864 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.864 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.865 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.866 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.866 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.866 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.866 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.866 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:47:48.866989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.867 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.919 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.920 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.920 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.964 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.965 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.966 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.966 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
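
disk.device.capacity emits one sample per attached block device, hence three volumes per instance: 1073741824 bytes is exactly 1 GiB, matching the flavor's 1 GiB root disk and 1 GiB ephemeral disk, while the small third device (485376 and 583680 bytes) is plausibly a config drive; the log does not name the devices, so that reading is an assumption. The arithmetic:

    GIB = 1024 ** 3
    print(1073741824 == GIB)              # True: the flavor's 1 GiB disks
    print(485376 / 1024, 583680 / 1024)   # 474.0 and 570.0 KiB third devices
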
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.967 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.967 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.967 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.967 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.968 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:48.968 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:47:48.967925) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.095 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.096 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.096 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.204 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.205 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.205 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.206 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.206 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.207 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.207 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.207 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.207 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.207 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.208 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.208 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.208 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.209 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.209 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.209 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.209 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.209 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.210 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.210 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:47:49.207559) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.210 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.211 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 578521054 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:47:49.209928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.211 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 98903610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.211 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 76991265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.212 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
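
The disk.device.read.latency volumes are cumulative read-time counters per device; assuming they are nanoseconds (libvirt reports block-device read time in ns), the totals above are modest. A quick conversion under that assumption:

    # Assuming cumulative nanoseconds of read time per device:
    for ns in (474440550, 65600453, 49214734):
        print(f"{ns / 1e6:.1f} ms")  # ~474.4, ~65.6, ~49.2 ms total
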
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.212 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.212 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.212 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.213 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.213 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.213 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:47:49.213116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.213 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.213 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.214 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.214 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.214 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.215 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.215 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.215 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.215 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.216 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.216 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.216 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.216 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:47:49.216277) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.216 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.217 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.217 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.217 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.218 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.218 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
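The per-device byte counts behind disk.device.usage (and the allocation and capacity meters later in this cycle) ultimately come from libvirt. A hedged sketch of reading the same numbers directly, assuming the libvirt-python binding, a reachable libvirtd, and guessed device names:

    import libvirt  # libvirt-python binding

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    for dev in ("vda", "vdb"):  # device names are assumptions, not from the log
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, "capacity:", capacity, "allocation:", allocation,
              "physical:", physical)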
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.218 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.218 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.218 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.219 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.219 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.219 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.219 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:47:49.219131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.219 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.220 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.220 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.220 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.220 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.221 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
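Two native thread ids interleave throughout this cycle: 15 runs the pollsters while 12 persists the heartbeat timestamps (_update_status), which is why an "Updated heartbeat" line can land a few entries after the poll that produced it. A small producer/consumer sketch of that split, with illustrative names:

    import queue
    import threading
    from datetime import datetime, timezone

    heartbeats = queue.Queue()

    def update_status():
        # Runs in its own thread, like thread 12 in the log.
        while (meter := heartbeats.get()) is not None:
            print(f"Updated heartbeat for {meter} "
                  f"({datetime.now(timezone.utc).isoformat()})")

    worker = threading.Thread(target=update_status)
    worker.start()
    for meter in ("disk.device.write.bytes", "power.state"):
        heartbeats.put(meter)  # the polling thread only enqueues
    heartbeats.put(None)       # shut the worker down
    worker.join()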
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.221 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.221 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.221 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.221 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.222 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:47:49.222057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.249 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.278 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.278 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
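Both instances report power.state volume 1. Assuming the value mirrors libvirt's virDomainState enumeration, 1 is VIR_DOMAIN_RUNNING; the full mapping as a lookup table:

    LIBVIRT_POWER_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_POWER_STATE[1])  # -> "running", matching both instances above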
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.279 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.279 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.279 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.279 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.279 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.279 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.280 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.280 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.280 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 2063543219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.281 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 12721696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.281 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.281 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.282 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.282 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.282 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:47:49.279609) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.282 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.282 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.283 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.283 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:47:49.282959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.283 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.283 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.284 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.284 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.285 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.285 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.285 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.285 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.285 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.286 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.286 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.286 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.286 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.286 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.287 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.287 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.287 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.288 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
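A quick unit check on the raw byte counts: for the first instance's first device, allocation slightly exceeds usage (values copied from the log):

    def mib(n_bytes):
        return n_bytes / 2**20

    print(f"allocation {mib(21307392):.2f} MiB vs usage {mib(21233664):.2f} MiB")
    # -> allocation 20.32 MiB vs usage 20.25 MiB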
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.288 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.288 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:47:49.286185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.288 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.289 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.289 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.289 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.289 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
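Unlike the device meters, disk.ephemeral.size finishes without emitting any "volume:" lines, consistent with a pollster whose generator yields nothing, e.g. when the flavor has no ephemeral disk. A sketch of that shape; the ephemeral size of 0 is an assumption:

    def ephemeral_size_samples(instances):
        for inst in instances:
            if inst["ephemeral_gb"]:  # nothing yielded when the flavor has none
                yield inst["uuid"], inst["ephemeral_gb"]

    print(list(ephemeral_size_samples(
        [{"uuid": "e73931e9-f7fa-4666-b781-700b385532a9", "ephemeral_gb": 0}])))
    # -> [] : the cycle still logs "Finished polling" with no samples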
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.289 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.290 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:47:49.289236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.290 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.290 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.290 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.290 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.290 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.291 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.291 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
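The network.* meters map naturally onto libvirt's interfaceStats 8-tuple. A hedged sketch, assuming the libvirt-python binding and a guessed tap device name:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("850ac274-3f22-41ce-b7d7-ac64d7adac70")
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap0")
    # cf. network.incoming.packets/.drop/.error and network.outgoing.bytes above
    print(rx_packets, rx_drop, rx_errs, tx_bytes)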
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.291 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.291 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.291 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.292 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.292 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.292 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.292 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:47:49.290555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.293 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:47:49.292120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.293 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.293 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.293 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.293 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:47:49.293623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.294 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.294 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.294 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.295 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.295 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.295 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.295 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.295 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:47:49.295319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.296 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.296 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.296 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.296 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.296 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.296 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.297 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.297 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.297 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.297 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.298 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.298 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
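The rate pollster is skipped rather than polled: per the message, discovery turned up no resources that had not already been handled this cycle. A sketch of that branch, illustrative only:

    def maybe_poll(meter, discovered, already_polled):
        if not [r for r in discovered if r not in already_polled]:
            print(f"Skip pollster {meter}, no new resources found this cycle")
            return
        print(f"Polling pollster {meter}")

    maybe_poll("network.outgoing.bytes.rate",
               discovered=["850ac274-3f22-41ce-b7d7-ac64d7adac70"],
               already_polled={"850ac274-3f22-41ce-b7d7-ac64d7adac70"})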
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.298 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.298 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.298 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.298 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:47:49.297051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.299 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.299 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 48250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.299 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/cpu volume: 41140000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.299 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
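The cpu meter is cumulative CPU time in nanoseconds, so the two volumes above are easy to sanity-check (values copied from the log):

    for uuid, ns in (("e73931e9-f7fa-4666-b781-700b385532a9", 48_250_000_000),
                     ("850ac274-3f22-41ce-b7d7-ac64d7adac70", 41_140_000_000)):
        print(uuid, ns / 1e9, "s of CPU time")  # 48.25 s and 41.14 s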
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.300 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:47:49.299072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.300 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.300 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.300 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.300 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.301 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.301 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:47:49.301000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.301 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.302 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
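memory.usage is reported in MiB; assuming it derives from a KiB counter such as libvirt's memoryStats, the fractional values above round-trip exactly:

    def kib_to_mib(kib):
        return kib / 1024

    print(kib_to_mib(49964))  # -> 48.79296875, the first instance above
    print(kib_to_mib(50112))  # -> 48.9375, the second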
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.302 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.303 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:47:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:47:49.304 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
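The "Finished processing pollster [...]" lines cluster at the tail of the task once every meter's samples have been handed off. One plausible sketch of why completion messages batch up like this when pollsters run on a small worker pool; the pool size and meter list are assumptions, not ceilometer configuration:

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def run(meter):
        return meter  # stand-in for poll + publish

    meters = ["cpu", "memory.usage", "disk.device.usage",
              "network.incoming.packets"]
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(run, m) for m in meters]
        for fut in as_completed(futures):
            print(f"Finished processing pollster [{fut.result()}].")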
Dec  1 19:47:51 compute-0 nova_compute[189564]: 2025-12-01 19:47:51.030 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:53 compute-0 podman[247205]: 2025-12-01 19:47:53.30825291 +0000 UTC m=+0.077346692 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
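The health_status entries are emitted by podman's periodic healthcheck runs against each container's configured test command. One way to trigger the same check by hand (requires podman on the host; container name from the log):

    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "podman_exporter"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")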
Dec  1 19:47:53 compute-0 nova_compute[189564]: 2025-12-01 19:47:53.334 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:56 compute-0 nova_compute[189564]: 2025-12-01 19:47:56.033 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:56 compute-0 podman[247235]: 2025-12-01 19:47:56.339528329 +0000 UTC m=+0.087282690 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 19:47:56 compute-0 podman[247229]: 2025-12-01 19:47:56.352414647 +0000 UTC m=+0.111299292 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 19:47:56 compute-0 podman[247228]: 2025-12-01 19:47:56.354431629 +0000 UTC m=+0.116648787 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 19:47:56 compute-0 podman[247227]: 2025-12-01 19:47:56.364299935 +0000 UTC m=+0.131903959 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec  1 19:47:56 compute-0 podman[247240]: 2025-12-01 19:47:56.400721561 +0000 UTC m=+0.143181758 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 19:47:58 compute-0 nova_compute[189564]: 2025-12-01 19:47:58.337 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:47:59 compute-0 podman[203750]: time="2025-12-01T19:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:47:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:47:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec  1 19:48:01 compute-0 nova_compute[189564]: 2025-12-01 19:48:01.035 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:01 compute-0 openstack_network_exporter[205914]: ERROR   19:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:48:01 compute-0 openstack_network_exporter[205914]: ERROR   19:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:48:01 compute-0 openstack_network_exporter[205914]: ERROR   19:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:48:01 compute-0 openstack_network_exporter[205914]: ERROR   19:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:48:01 compute-0 openstack_network_exporter[205914]: ERROR   19:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:48:02 compute-0 nova_compute[189564]: 2025-12-01 19:48:02.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:48:02 compute-0 nova_compute[189564]: 2025-12-01 19:48:02.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:48:02 compute-0 nova_compute[189564]: 2025-12-01 19:48:02.251 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 19:48:03 compute-0 nova_compute[189564]: 2025-12-01 19:48:03.147 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:48:03 compute-0 nova_compute[189564]: 2025-12-01 19:48:03.147 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:48:03 compute-0 nova_compute[189564]: 2025-12-01 19:48:03.147 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:48:03 compute-0 nova_compute[189564]: 2025-12-01 19:48:03.148 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:48:03 compute-0 nova_compute[189564]: 2025-12-01 19:48:03.340 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:05 compute-0 nova_compute[189564]: 2025-12-01 19:48:05.176 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:48:05 compute-0 nova_compute[189564]: 2025-12-01 19:48:05.195 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:48:05 compute-0 nova_compute[189564]: 2025-12-01 19:48:05.196 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 19:48:05 compute-0 nova_compute[189564]: 2025-12-01 19:48:05.197 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:48:05 compute-0 nova_compute[189564]: 2025-12-01 19:48:05.198 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:48:05 compute-0 nova_compute[189564]: 2025-12-01 19:48:05.199 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:48:06 compute-0 nova_compute[189564]: 2025-12-01 19:48:06.039 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:06 compute-0 nova_compute[189564]: 2025-12-01 19:48:06.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:48:06 compute-0 nova_compute[189564]: 2025-12-01 19:48:06.252 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:48:07 compute-0 podman[247325]: 2025-12-01 19:48:07.37302038 +0000 UTC m=+0.125567243 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Dec  1 19:48:08 compute-0 nova_compute[189564]: 2025-12-01 19:48:08.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:48:08 compute-0 nova_compute[189564]: 2025-12-01 19:48:08.344 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.280 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.282 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.399 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.500 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.503 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.565 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.566 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.675 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.677 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.799 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.822 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.921 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:48:09 compute-0 nova_compute[189564]: 2025-12-01 19:48:09.923 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.017 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.019 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.083 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.086 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.181 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.654 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.656 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4846MB free_disk=72.36144638061523GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.657 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.657 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.783 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.784 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.785 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.786 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.875 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.898 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.901 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:48:10 compute-0 nova_compute[189564]: 2025-12-01 19:48:10.902 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:48:11 compute-0 nova_compute[189564]: 2025-12-01 19:48:11.041 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:48:12.201 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:48:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:48:12.202 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:48:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:48:12.203 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:48:12 compute-0 nova_compute[189564]: 2025-12-01 19:48:12.904 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:48:13 compute-0 nova_compute[189564]: 2025-12-01 19:48:13.346 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:13 compute-0 podman[247367]: 2025-12-01 19:48:13.35701064 +0000 UTC m=+0.112596062 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:48:16 compute-0 nova_compute[189564]: 2025-12-01 19:48:16.045 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:18 compute-0 nova_compute[189564]: 2025-12-01 19:48:18.349 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:18 compute-0 podman[247390]: 2025-12-01 19:48:18.358720593 +0000 UTC m=+0.114165190 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:48:21 compute-0 nova_compute[189564]: 2025-12-01 19:48:21.047 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:23 compute-0 nova_compute[189564]: 2025-12-01 19:48:23.354 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:24 compute-0 podman[247409]: 2025-12-01 19:48:24.366969656 +0000 UTC m=+0.119589298 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 19:48:26 compute-0 nova_compute[189564]: 2025-12-01 19:48:26.049 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:48:27 compute-0 podman[247432]: 2025-12-01 19:48:27.35475191 +0000 UTC m=+0.117226285 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, distribution-scope=public, config_id=edpm, release=1214.1726694543, version=9.4, architecture=x86_64, managed_by=edpm_ansible)
Dec  1 19:48:27 compute-0 podman[247433]: 2025-12-01 19:48:27.358035612 +0000 UTC m=+0.110276151 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:48:27 compute-0 podman[247434]: 2025-12-01 19:48:27.370814216 +0000 UTC m=+0.119327770 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 19:48:27 compute-0 podman[247440]: 2025-12-01 19:48:27.390908888 +0000 UTC m=+0.126470802 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 19:48:27 compute-0 podman[247442]: 2025-12-01 19:48:27.450418578 +0000 UTC m=+0.180250664 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Dec  1 19:48:28 compute-0 nova_compute[189564]: 2025-12-01 19:48:28.358 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:29 compute-0 podman[203750]: time="2025-12-01T19:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:48:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:48:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
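The two GET lines above are the podman system service answering libpod REST calls over its unix socket (the same socket the podman_exporter container mounts). A minimal stdlib-only sketch of issuing the containers/json request yourself, assuming access to /run/podman/podman.sock:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    resp = conn.getresponse()
    body = resp.read()
    print(resp.status, len(body), "bytes")  # access log above shows 200 / 29521 bytes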
Dec  1 19:48:31 compute-0 nova_compute[189564]: 2025-12-01 19:48:31.052 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:31 compute-0 openstack_network_exporter[205914]: ERROR   19:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:48:31 compute-0 openstack_network_exporter[205914]: ERROR   19:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:48:31 compute-0 openstack_network_exporter[205914]: ERROR   19:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:48:31 compute-0 openstack_network_exporter[205914]: ERROR   19:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:48:31 compute-0 openstack_network_exporter[205914]: ERROR   19:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
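The appctl.go errors above all reduce to one root cause: the exporter looks for <daemon>.<pid>.ctl control sockets and finds none for the daemons it cannot reach. A hedged sketch of that discovery step; the run directories below are the usual packaging defaults (assumptions, not values taken from the exporter's own config):

    import glob
    import os

    OVS_RUNDIR = os.environ.get("OVS_RUNDIR", "/var/run/openvswitch")
    OVN_RUNDIR = os.environ.get("OVN_RUNDIR", "/var/run/ovn")

    def find_ctl(rundir, daemon):
        """ovs-appctl targets unix sockets named <daemon>.<pid>.ctl."""
        return glob.glob(os.path.join(rundir, "%s.*.ctl" % daemon))

    for rundir, daemon in ((OVS_RUNDIR, "ovsdb-server"),
                           (OVS_RUNDIR, "ovs-vswitchd"),
                           (OVN_RUNDIR, "ovn-northd")):
        socks = find_ctl(rundir, daemon)
        print(daemon, "->", socks or "no control socket files found")

On a compute node ovn-northd does not run at all, so the "no control socket files found for ovn-northd" pair recurs on every scrape; the dpif-netdev "please specify an existing datapath" errors are likewise expected on a kernel-datapath host, where no userspace (netdev) datapath exists to query.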
Dec  1 19:48:33 compute-0 nova_compute[189564]: 2025-12-01 19:48:33.361 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:36 compute-0 nova_compute[189564]: 2025-12-01 19:48:36.055 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
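The recurring "[POLLIN] on fd 27 __log_wakeup" DEBUG lines come from the python-ovs IDL used by nova_compute: its Poller logs every readable-fd wakeup on the OVSDB connection. A self-contained sketch of the same primitive, with a socketpair standing in for the OVSDB socket (fd number will differ):

    import socket

    import ovs.poller

    a, b = socket.socketpair()
    b.send(b"x")                                   # make `a` readable

    poller = ovs.poller.Poller()
    poller.fd_wait(a.fileno(), ovs.poller.POLLIN)  # register interest in POLLIN
    poller.block()                                 # logs "[POLLIN] on fd N" at DEBUG
    print("woke on fd", a.fileno())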
Dec  1 19:48:38 compute-0 podman[247528]: 2025-12-01 19:48:38.349795162 +0000 UTC m=+0.111519270 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm)
Dec  1 19:48:38 compute-0 nova_compute[189564]: 2025-12-01 19:48:38.365 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:41 compute-0 nova_compute[189564]: 2025-12-01 19:48:41.058 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:43 compute-0 nova_compute[189564]: 2025-12-01 19:48:43.369 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:44 compute-0 podman[247550]: 2025-12-01 19:48:44.291495404 +0000 UTC m=+0.067827448 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
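Each health_status event above is podman running the container's configured healthcheck command ('/openstack/healthcheck ...') and recording the result, failing streak included. A small sketch reading that state back for one of the containers named in the log; note the State key is "Health" on current podman releases, while some older ones used "Healthcheck":

    import json
    import subprocess

    out = subprocess.run(["podman", "inspect", "node_exporter"],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))  # e.g. healthy 0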
Dec  1 19:48:46 compute-0 nova_compute[189564]: 2025-12-01 19:48:46.061 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:48 compute-0 nova_compute[189564]: 2025-12-01 19:48:48.370 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:49 compute-0 podman[247574]: 2025-12-01 19:48:49.361760728 +0000 UTC m=+0.115092419 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec  1 19:48:51 compute-0 nova_compute[189564]: 2025-12-01 19:48:51.064 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:53 compute-0 nova_compute[189564]: 2025-12-01 19:48:53.372 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:55 compute-0 podman[247594]: 2025-12-01 19:48:55.341423795 +0000 UTC m=+0.108400242 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:48:56 compute-0 nova_compute[189564]: 2025-12-01 19:48:56.067 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:58 compute-0 podman[247618]: 2025-12-01 19:48:58.368924836 +0000 UTC m=+0.124100028 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, version=9.4, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9)
Dec  1 19:48:58 compute-0 nova_compute[189564]: 2025-12-01 19:48:58.374 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:48:58 compute-0 podman[247619]: 2025-12-01 19:48:58.386684505 +0000 UTC m=+0.133582871 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 19:48:58 compute-0 podman[247620]: 2025-12-01 19:48:58.399961265 +0000 UTC m=+0.143206298 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 19:48:58 compute-0 podman[247628]: 2025-12-01 19:48:58.405417594 +0000 UTC m=+0.132050824 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 19:48:58 compute-0 podman[247621]: 2025-12-01 19:48:58.415858567 +0000 UTC m=+0.146101229 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 19:48:59 compute-0 podman[203750]: time="2025-12-01T19:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:48:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:48:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec  1 19:49:01 compute-0 nova_compute[189564]: 2025-12-01 19:49:01.070 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:01 compute-0 openstack_network_exporter[205914]: ERROR   19:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:49:01 compute-0 openstack_network_exporter[205914]: ERROR   19:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:49:01 compute-0 openstack_network_exporter[205914]: ERROR   19:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:49:01 compute-0 openstack_network_exporter[205914]: ERROR   19:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:49:01 compute-0 openstack_network_exporter[205914]: ERROR   19:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:49:02 compute-0 nova_compute[189564]: 2025-12-01 19:49:02.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:02 compute-0 nova_compute[189564]: 2025-12-01 19:49:02.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:49:03 compute-0 nova_compute[189564]: 2025-12-01 19:49:03.167 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:49:03 compute-0 nova_compute[189564]: 2025-12-01 19:49:03.168 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:49:03 compute-0 nova_compute[189564]: 2025-12-01 19:49:03.169 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:49:03 compute-0 nova_compute[189564]: 2025-12-01 19:49:03.377 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:06 compute-0 nova_compute[189564]: 2025-12-01 19:49:06.073 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:06 compute-0 nova_compute[189564]: 2025-12-01 19:49:06.340 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:49:06 compute-0 nova_compute[189564]: 2025-12-01 19:49:06.373 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:49:06 compute-0 nova_compute[189564]: 2025-12-01 19:49:06.373 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
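The info-cache heal above logs the full network_info it stored. Reduced to the fields of interest, extracting the fixed and floating addresses for the instance's single port looks like this (data copied from the update_instance_cache_with_nw_info line):

    # VIF data trimmed from the logged network_info entry above.
    vif = {
        "id": "076102cd-d411-4d3d-a31e-4851d4a8d107",
        "network": {"subnets": [{"cidr": "192.168.0.0/24",
                                 "ips": [{"address": "192.168.0.62",
                                          "floating_ips": [
                                              {"address": "192.168.122.240"}]}]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print("fixed:", ip["address"], "floating:", floats)
    # -> fixed: 192.168.0.62 floating: ['192.168.122.240']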
Dec  1 19:49:06 compute-0 nova_compute[189564]: 2025-12-01 19:49:06.373 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:06 compute-0 nova_compute[189564]: 2025-12-01 19:49:06.374 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:06 compute-0 nova_compute[189564]: 2025-12-01 19:49:06.374 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:06 compute-0 nova_compute[189564]: 2025-12-01 19:49:06.374 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:49:08 compute-0 nova_compute[189564]: 2025-12-01 19:49:08.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:08 compute-0 nova_compute[189564]: 2025-12-01 19:49:08.379 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.273 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.273 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.273 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
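The Acquiring/Acquired/Released trios above are oslo.concurrency's standard lock instrumentation: the compute_resources lines come from the @lockutils.synchronized decorator (hence the `inner` wrapper in the log), while the earlier refresh_cache-<uuid> lines come from the lockutils.lock context manager. A minimal sketch of both forms:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs with the named lock held; lockutils emits the
        # "acquired"/"released" DEBUG lines seen above, with hold times.
        pass

    clean_compute_node_cache()

    # Context-manager form, as used around the info-cache refresh:
    with lockutils.lock("refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70"):
        pass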
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.273 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:49:09 compute-0 podman[247716]: 2025-12-01 19:49:09.365161247 +0000 UTC m=+0.128482103 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.376 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.460 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.461 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.545 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.547 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.634 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.635 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.728 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.735 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.795 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.796 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.854 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.856 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.922 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.924 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:49:09 compute-0 nova_compute[189564]: 2025-12-01 19:49:09.989 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
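Every qemu-img probe above is wrapped in `python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30`, i.e. nova caps the child's address space at 1 GiB and its CPU time at 30 s before reading image metadata, and runs each probe twice (once per disk-info consumer). A roughly equivalent call through the library, with the same limits and environment as the logged command line:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,  # --as
                                        cpu_time=30)                       # --cpu
    out, err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk",
        "--force-share", "--output=json",
        prlimit=limits, env_variables={"LC_ALL": "C", "LANG": "C"})
    print(out[:80])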
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.421 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.422 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=72.36144638061523GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.422 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.423 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.532 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.532 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.533 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.533 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.601 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.624 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
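The inventory reported to placement above determines schedulable capacity per resource class as (total - reserved) * allocation_ratio. Worked through with the logged numbers:

    # Inventory copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "capacity =", cap)
    # VCPU capacity = 32.0, MEMORY_MB capacity = 7168.0, DISK_GB capacity = 70.2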
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.626 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:49:10 compute-0 nova_compute[189564]: 2025-12-01 19:49:10.626 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:49:11 compute-0 nova_compute[189564]: 2025-12-01 19:49:11.075 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:11 compute-0 nova_compute[189564]: 2025-12-01 19:49:11.622 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:11 compute-0 nova_compute[189564]: 2025-12-01 19:49:11.623 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:11 compute-0 nova_compute[189564]: 2025-12-01 19:49:11.650 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:11 compute-0 nova_compute[189564]: 2025-12-01 19:49:11.651 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:49:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:49:12.203 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:49:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:49:12.204 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:49:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:49:12.205 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:49:13 compute-0 nova_compute[189564]: 2025-12-01 19:49:13.382 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:14 compute-0 podman[247760]: 2025-12-01 19:49:14.785623602 +0000 UTC m=+0.081938905 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:49:16 compute-0 nova_compute[189564]: 2025-12-01 19:49:16.078 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:18 compute-0 nova_compute[189564]: 2025-12-01 19:49:18.384 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:20 compute-0 podman[247788]: 2025-12-01 19:49:20.378656978 +0000 UTC m=+0.140609839 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 19:49:21 compute-0 nova_compute[189564]: 2025-12-01 19:49:21.080 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:23 compute-0 nova_compute[189564]: 2025-12-01 19:49:23.387 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:26 compute-0 nova_compute[189564]: 2025-12-01 19:49:26.083 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:49:26 compute-0 podman[247809]: 2025-12-01 19:49:26.322446169 +0000 UTC m=+0.088562458 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:49:28 compute-0 nova_compute[189564]: 2025-12-01 19:49:28.391 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:29 compute-0 podman[247836]: 2025-12-01 19:49:29.372333374 +0000 UTC m=+0.120579939 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Dec  1 19:49:29 compute-0 podman[247839]: 2025-12-01 19:49:29.391447925 +0000 UTC m=+0.119498136 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  1 19:49:29 compute-0 podman[247837]: 2025-12-01 19:49:29.400550306 +0000 UTC m=+0.141891298 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:49:29 compute-0 podman[247838]: 2025-12-01 19:49:29.410313448 +0000 UTC m=+0.143606461 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4)
Dec  1 19:49:29 compute-0 podman[247840]: 2025-12-01 19:49:29.440270344 +0000 UTC m=+0.164494556 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
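The podman health_status events above all share one shape: a container ID followed by a parenthesized list of key=value fields. A minimal sketch for pulling the container name and health status out of such lines (field names taken from the events themselves):

    import re

    EVENT_RE = re.compile(
        r"[(,]\s*name=(?P<name>[^,)]+).*?health_status=(?P<status>[^,)]+)"
    )

    def parse_health_event(line):
        # Returns (container_name, health_status), or None if the line
        # is not a health_status event.
        m = EVENT_RE.search(line)
        return (m.group("name"), m.group("status")) if m else None

    line = ("container health_status 9bc16c1e (image=quay.io/prometheus/"
            "node-exporter:v1.5.0, name=node_exporter, health_status=healthy)")
    print(parse_health_event(line))  # ('node_exporter', 'healthy')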
Dec  1 19:49:29 compute-0 podman[203750]: time="2025-12-01T19:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:49:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:49:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
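The two GET requests above are the libpod REST API answering over the podman socket, the same unix:///run/podman/podman.sock the podman_exporter container is pointed at. A minimal sketch of the first call using only the standard library; the socket path and API version string are taken from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket, enough for the libpod API."""
        def __init__(self, socket_path):
            super().__init__("localhost")  # host is unused but required
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for ctr in json.loads(conn.getresponse().read()):
        print(ctr.get("Names"), ctr.get("State"))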
Dec  1 19:49:31 compute-0 nova_compute[189564]: 2025-12-01 19:49:31.086 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:31 compute-0 openstack_network_exporter[205914]: ERROR   19:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:49:31 compute-0 openstack_network_exporter[205914]: ERROR   19:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:49:31 compute-0 openstack_network_exporter[205914]: ERROR   19:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:49:31 compute-0 openstack_network_exporter[205914]: ERROR   19:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:49:31 compute-0 openstack_network_exporter[205914]: ERROR   19:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
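The four ERROR lines above are openstack_network_exporter probing control sockets that are absent on this node; ovn-northd in particular normally runs on the control plane, so its socket is expected to be missing on a compute host. A rough sketch of the same probe, checking for ovs-appctl-style *.ctl files under the run directories that the exporter container mounts (the exact paths are assumptions based on the container volumes shown later in the log):

    import glob

    RUN_DIRS = {
        "ovs": "/var/run/openvswitch",
        "ovn": "/var/lib/openvswitch/ovn",
    }

    for component, run_dir in RUN_DIRS.items():
        sockets = glob.glob(f"{run_dir}/*.ctl")
        if sockets:
            print(f"{component}: found control sockets {sockets}")
        else:
            # Matches the exporter's "no control socket files found" errors.
            print(f"{component}: no control socket files found")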
Dec  1 19:49:33 compute-0 nova_compute[189564]: 2025-12-01 19:49:33.392 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:36 compute-0 nova_compute[189564]: 2025-12-01 19:49:36.091 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:38 compute-0 nova_compute[189564]: 2025-12-01 19:49:38.395 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:40 compute-0 podman[247934]: 2025-12-01 19:49:40.327784474 +0000 UTC m=+0.096967189 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Dec  1 19:49:41 compute-0 nova_compute[189564]: 2025-12-01 19:49:41.094 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:43 compute-0 nova_compute[189564]: 2025-12-01 19:49:43.397 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:45 compute-0 podman[247956]: 2025-12-01 19:49:45.351533846 +0000 UTC m=+0.120429635 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:49:46 compute-0 nova_compute[189564]: 2025-12-01 19:49:46.095 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:48 compute-0 nova_compute[189564]: 2025-12-01 19:49:48.399 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.816 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.817 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
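The two manager lines above report more pollsters than worker threads, so polling runs largely serially. A minimal sketch of that dispatch pattern, not ceilometer's actual code; the pollster names are taken from the polling lines that follow:

    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name):
        # Stand-in for a real pollster: discover resources, build samples.
        return f"polled {name}"

    pollsters = [
        "network.incoming.bytes.delta",
        "network.outgoing.packets",
        "disk.device.capacity",
    ]

    # With max_workers=1 (the "[1] threads" in the log), tasks queue up and
    # run one after another, which is why a long pollster list takes longer.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(run_pollster, pollsters):
            print(result)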
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.819 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.829 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.833 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.838 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'name': 'vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
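The two discovery dumps above yield one dict per local libvirt instance; keys under the "metering." prefix in the instance metadata (like metering.server_group on the second instance) are the user metadata ceilometer carries on samples. A small sketch grouping the discovered instances by that key, with the IDs copied from the lines above:

    from collections import defaultdict

    instances = [
        {"id": "e73931e9-f7fa-4666-b781-700b385532a9", "metadata": {}},
        {"id": "850ac274-3f22-41ce-b7d7-ac64d7adac70",
         "metadata": {"metering.server_group":
                      "47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9"}},
    ]

    groups = defaultdict(list)
    for inst in instances:
        groups[inst["metadata"].get("metering.server_group")].append(inst["id"])
    print(dict(groups))  # one group per server_group value, None when unset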
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.838 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.839 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.839 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.839 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.841 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:49:48.839427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.848 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 rsyslogd[236874]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.855 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.856 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
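Both instances report network.incoming.bytes.delta volume 0 above: delta meters publish the change in a cumulative counter since the previous poll, so an idle interface (or, in this sketch, a first reading) yields 0. A minimal sketch of that bookkeeping, assuming monotonically increasing counters:

    # Keep the last reading per (instance, interface) and report the increase.
    _last = {}

    def bytes_delta(instance_id, iface, rx_bytes):
        key = (instance_id, iface)
        prev = _last.get(key)
        _last[key] = rx_bytes
        if prev is None:
            return 0                       # first poll: no baseline yet
        return max(rx_bytes - prev, 0)     # guard against counter resets

    print(bytes_delta("e73931e9-f7fa-4666-b781-700b385532a9", "tap0", 2136))  # 0
    print(bytes_delta("e73931e9-f7fa-4666-b781-700b385532a9", "tap0", 2136))  # still 0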
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.856 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.857 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.857 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.857 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.857 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.858 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.858 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.859 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.860 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.860 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.860 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:49:48.857683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.860 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.860 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.860 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.861 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.861 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.862 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:49:48.860872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.862 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.863 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.863 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.863 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.863 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.864 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.864 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.864 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:49:48.863983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.865 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.866 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.866 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.866 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.867 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.867 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:49:48.867152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.867 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.868 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.869 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.869 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.869 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.869 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.869 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.870 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:49:48.870023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.911 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.912 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.912 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.944 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.945 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.945 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.946 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.947 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.947 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.947 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.947 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.948 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:48.948 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:49:48.948027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.009 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.010 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.010 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.089 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.090 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.096 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.097 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
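The lines above trace one complete pollster cycle: resource discovery via the local_instances method, a coordination check (no hashring is configured, so every local instance is polled), a heartbeat update, and one sample per instance/device pair emitted through _stats_to_sample. A minimal sketch of that control flow in Python; the helper names discover and get_samples are stand-ins for the real AgentManager internals, and only the ordering of the four steps is taken from the log:

    import datetime

    def run_pollster_cycle(name, discover, get_samples, heartbeats):
        # 1. Discovery: resolve resources for the configured discovery
        #    method (local_instances in the log above).
        resources = discover("local_instances")
        if not resources:
            # Matches the "Skip pollster ..." branch seen for the .rate meters.
            return []
        # 2. Coordination: with no group name there is no hashring to
        #    consult, so every discovered resource is polled locally.
        # 3. Heartbeat: record that this pollster ran in this cycle.
        heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
        # 4. Poll: one sample per (instance, device) pair.
        return [s for r in resources for s in get_samples(r)]

    heartbeats = {}
    samples = run_pollster_cycle(
        "disk.device.read.bytes",
        discover=lambda method: ["e73931e9", "850ac274"],    # the two instances above
        get_samples=lambda inst: [(inst, "vda", 23308800)],  # device name assumed
        heartbeats=heartbeats,
    )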
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.098 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.098 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.098 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.098 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.099 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.099 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:49:49.098984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.099 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.100 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.100 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
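network.incoming.bytes.rate is skipped this cycle, while its cumulative counterpart network.incoming.bytes polled normally above. A rate gauge is conventionally derived from two successive cumulative samples; a worked sketch, where the second sample and the 300-second polling interval are assumptions for illustration:

    # First value matches the network.incoming.bytes sample for instance
    # e73931e9 above; the later sample and the interval are hypothetical.
    t1, v1 = 0.0, 2136        # seconds, cumulative bytes
    t2, v2 = 300.0, 14536
    rate = (v2 - v1) / (t2 - t1)
    print(rate)               # 41.333... bytes/second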
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.100 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.100 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.100 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.100 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.101 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.101 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.101 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:49:49.101056) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.102 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.102 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 578521054 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.102 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 98903610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.103 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 76991265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.103 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.103 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.103 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.103 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.104 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.104 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:49:49.104190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.104 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.104 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.105 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.105 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.105 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.106 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.106 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.106 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.107 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.107 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.107 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.107 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.107 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.107 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:49:49.107338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.108 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.108 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.108 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.109 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.109 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.109 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.110 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.110 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.110 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.110 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.110 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.111 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.111 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.111 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.112 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.112 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.113 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.113 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.113 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.113 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.113 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.113 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:49:49.110499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:49:49.113927) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.151 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.193 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.194 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
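Both instances report power.state volume 1. The value 1 denotes a running domain in both the libvirt virDomainState enum and nova's power_state enum, which matches two active instances. A hypothetical decoding table for reading these samples:

    # Assumed decoding table (values follow nova's power_state constants).
    POWER_STATES = {0: "nostate", 1: "running", 3: "paused",
                    4: "shutdown", 6: "crashed", 7: "suspended"}
    for volume in (1, 1):          # the two power.state samples logged above
        print(POWER_STATES.get(volume, "unknown"))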
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.194 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.195 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.195 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.195 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.195 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.195 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.196 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.196 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:49:49.195658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.197 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 2063543219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.198 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 12721696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.198 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.199 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.199 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.200 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.200 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.200 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.200 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:49:49.200328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.201 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.201 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.202 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.202 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.203 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.204 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.204 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.204 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.204 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.204 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.205 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.205 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:49:49.205009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.206 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.206 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.206 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.207 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.207 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.208 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
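The per-device disk gauges polled in this cycle — disk.device.capacity, disk.device.allocation, and disk.device.usage (the latter served by PerDevicePhysicalPollster) — line up with the three numbers libvirt reports per block device. A sketch using the libvirt-python binding; the connection URI and device name are assumptions, and the metric-to-field mapping is inferred from the pollster class names rather than confirmed by the log:

    import libvirt

    conn = libvirt.open("qemu:///system")   # assumed local hypervisor URI
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    capacity, allocation, physical = dom.blockInfo("vda")  # device name assumed
    # disk.device.capacity   <- capacity   (virtual size of the disk)
    # disk.device.allocation <- allocation (bytes allocated to the image)
    # disk.device.usage      <- physical   (space the image occupies on the host)
    conn.close()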
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.208 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.209 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.209 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.209 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.209 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.210 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.210 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.211 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.211 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:49:49.209534) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.211 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.212 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.212 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:49:49.212016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.213 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.213 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.214 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.214 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.214 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.214 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.214 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:49:49.214730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.215 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.216 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.216 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.216 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.216 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.216 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.217 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:49:49.216858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.217 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.218 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.219 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.219 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.219 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.220 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.220 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.220 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.220 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:49:49.220410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.221 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.221 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.222 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.223 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.223 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.223 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.223 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.224 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.224 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:49:49.224128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.224 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.225 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.226 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.226 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.227 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.227 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.227 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.227 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.227 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.227 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:49:49.227882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.228 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 50180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.228 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/cpu volume: 43060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.229 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
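The cpu samples are cumulative guest CPU time in nanoseconds (50180000000 ns is roughly 50.18 s of CPU time since the instance started), so a utilisation percentage has to be derived from the delta between two polls. A worked example; the second sample, the 300-second interval, and the single-vCPU flavor are assumptions:

    ns1 = 50_180_000_000      # cpu sample for instance e73931e9 above
    ns2 = 50_480_000_000      # hypothetical next poll, 300 s later
    interval_s, vcpus = 300, 1

    cpu_util_pct = (ns2 - ns1) / (interval_s * 1e9 * vcpus) * 100
    print(f"{cpu_util_pct:.2f}%")   # 0.10%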
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.229 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.229 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.230 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.230 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.230 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.230 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.231 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.231 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
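The fractional memory.usage volumes are exact multiples of 1/1024 (48.79296875 MB is precisely 49964 KiB), consistent with libvirt's KiB-valued memoryStats counters being converted to MB. A sketch of the usual derivation; preferring available minus unused with an rss fallback is an assumption about the computation, not something the log confirms:

    import libvirt

    conn = libvirt.open("qemu:///system")   # assumed URI
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    stats = dom.memoryStats()               # KiB-valued balloon counters
    if "available" in stats and "unused" in stats:
        used_kib = stats["available"] - stats["unused"]
    else:
        used_kib = stats["rss"]
    print(used_kib / 1024.0)                # MB, e.g. 48.79296875 as logged
    conn.close()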
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:49:49.230424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.232 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.233 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:49:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:49:49.234 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
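[annotation] The memory.usage samples above (48.79 MiB and 48.94 MiB for the two instances) come from libvirt domain memory statistics. A minimal sketch of that kind of derivation, assuming the libvirt Python bindings and a local qemu:///system connection; the available-minus-unused arithmetic is an approximation of the pollster's logic, not ceilometer's verbatim code:

    import libvirt  # python3-libvirt bindings (assumed installed)

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        stats = dom.memoryStats()  # guest-reported values, in KiB
        if "available" in stats and "unused" in stats:
            usage_mib = (stats["available"] - stats["unused"]) / 1024.0
        else:
            usage_mib = stats.get("rss", 0) / 1024.0  # fallback: host RSS
        print(dom.UUIDString(), "memory.usage", usage_mib)
    conn.close()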
Dec  1 19:49:51 compute-0 nova_compute[189564]: 2025-12-01 19:49:51.098 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:51 compute-0 podman[247982]: 2025-12-01 19:49:51.400555372 +0000 UTC m=+0.116653177 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  1 19:49:53 compute-0 nova_compute[189564]: 2025-12-01 19:49:53.402 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:56 compute-0 nova_compute[189564]: 2025-12-01 19:49:56.100 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:57 compute-0 podman[248003]: 2025-12-01 19:49:57.344480048 +0000 UTC m=+0.102564341 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:49:58 compute-0 nova_compute[189564]: 2025-12-01 19:49:58.404 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:49:59 compute-0 podman[203750]: time="2025-12-01T19:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:49:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:49:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
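[annotation] The two GET requests above are podman_exporter querying podman's libpod REST API over the unix socket. A minimal sketch of the same query, assuming /run/podman/podman.sock (the path mounted into the exporter container) and the v4.9.3 prefix shown in the log; the UnixHTTPConnection helper is a local stdlib convenience, not a podman client API:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix domain socket (stdlib-only helper)."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")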
Dec  1 19:50:00 compute-0 podman[248027]: 2025-12-01 19:50:00.361063232 +0000 UTC m=+0.117706140 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, container_name=kepler, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:50:00 compute-0 podman[248029]: 2025-12-01 19:50:00.362093605 +0000 UTC m=+0.117476953 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 19:50:00 compute-0 podman[248028]: 2025-12-01 19:50:00.369468962 +0000 UTC m=+0.119605238 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:50:00 compute-0 podman[248030]: 2025-12-01 19:50:00.393093583 +0000 UTC m=+0.129856456 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 19:50:00 compute-0 podman[248034]: 2025-12-01 19:50:00.434929446 +0000 UTC m=+0.168438548 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:50:01 compute-0 nova_compute[189564]: 2025-12-01 19:50:01.103 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:50:01 compute-0 openstack_network_exporter[205914]: ERROR   19:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:50:01 compute-0 openstack_network_exporter[205914]: ERROR   19:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:50:01 compute-0 openstack_network_exporter[205914]: ERROR   19:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:50:01 compute-0 openstack_network_exporter[205914]: ERROR   19:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:50:01 compute-0 openstack_network_exporter[205914]: ERROR   19:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
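[annotation] The exporter errors above are expected on a compute node: ovn-northd runs only on controller nodes, so its appctl control socket never exists here. ovs-appctl and ovn-appctl locate a daemon through its <name>.<pid>.ctl socket in the run directory; a minimal sketch of that lookup, with typical default paths assumed:

    import glob
    import subprocess

    def appctl(daemon, *cmd, rundir="/var/run/openvswitch"):
        socks = glob.glob(f"{rundir}/{daemon}.*.ctl")
        if not socks:
            # This is the condition behind the repeated exporter errors.
            raise FileNotFoundError(f"no control socket files found for {daemon}")
        out = subprocess.run(["ovs-appctl", "-t", socks[0], *cmd],
                             capture_output=True, text=True, check=True)
        return out.stdout

    print(appctl("ovs-vswitchd", "version"))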
Dec  1 19:50:03 compute-0 nova_compute[189564]: 2025-12-01 19:50:03.408 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:50:04 compute-0 nova_compute[189564]: 2025-12-01 19:50:04.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:50:04 compute-0 nova_compute[189564]: 2025-12-01 19:50:04.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:50:04 compute-0 nova_compute[189564]: 2025-12-01 19:50:04.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 19:50:05 compute-0 nova_compute[189564]: 2025-12-01 19:50:05.247 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:50:05 compute-0 nova_compute[189564]: 2025-12-01 19:50:05.248 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:50:05 compute-0 nova_compute[189564]: 2025-12-01 19:50:05.248 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:50:05 compute-0 nova_compute[189564]: 2025-12-01 19:50:05.249 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:50:06 compute-0 nova_compute[189564]: 2025-12-01 19:50:06.106 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:50:08 compute-0 nova_compute[189564]: 2025-12-01 19:50:08.411 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:50:09 compute-0 nova_compute[189564]: 2025-12-01 19:50:09.384 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
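[annotation] The network_info payload above nests each fixed IP (with any floating IPs mapped onto it) under the VIF's subnets. A minimal sketch of walking that structure, using a trimmed copy of the logged JSON:

    import json

    raw = """[{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a",
               "network": {"subnets": [{"ips": [{"address": "192.168.0.47",
                 "floating_ips": [{"address": "192.168.122.206"}]}]}]}}]"""
    for vif in json.loads(raw):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)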
Dec  1 19:50:09 compute-0 nova_compute[189564]: 2025-12-01 19:50:09.405 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:50:09 compute-0 nova_compute[189564]: 2025-12-01 19:50:09.406 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 19:50:09 compute-0 nova_compute[189564]: 2025-12-01 19:50:09.407 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:50:09 compute-0 nova_compute[189564]: 2025-12-01 19:50:09.408 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:50:09 compute-0 nova_compute[189564]: 2025-12-01 19:50:09.408 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:50:09 compute-0 nova_compute[189564]: 2025-12-01 19:50:09.409 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:50:09 compute-0 nova_compute[189564]: 2025-12-01 19:50:09.410 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
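[annotation] The run_periodic_tasks entries above are oslo.service dispatching decorated methods on nova's ComputeManager; _reclaim_queued_deletes short-circuits because reclaim_instance_interval is not set above 0. A minimal sketch of the same machinery with a toy manager (the task body is illustrative, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            # nova's tasks similarly log a "Running periodic task" line,
            # then skip their real work when the interval option is <= 0.
            pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)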
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.110 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.280 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.282 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.386 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:50:11 compute-0 podman[248126]: 2025-12-01 19:50:11.415384799 +0000 UTC m=+0.167278053 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.469 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.471 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.533 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.534 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.631 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.633 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.731 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.743 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.828 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.830 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.890 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.891 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.970 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:50:11 compute-0 nova_compute[189564]: 2025-12-01 19:50:11.972 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.038 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
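[annotation] Each qemu-img info call above runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a pathological image cannot stall the resource audit. A minimal sketch of issuing one such call; the instance path is a placeholder:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<uuid>/disk",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30))
    print(out)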
Dec  1 19:50:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:50:12.205 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:50:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:50:12.208 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:50:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:50:12.209 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
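[annotation] The acquire/release triplets above (with waited/held timings) are oslo.concurrency's lockutils instrumentation. A minimal sketch of the two forms seen in this log, decorator and context manager; the guarded bodies are illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_resources():
        pass  # held time is what the "held N.NNNs" entries report

    with lockutils.lock("_check_child_processes"):
        pass  # context-manager form, as in the ProcessMonitor lines

    update_resources()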
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.586 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.589 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4826MB free_disk=72.36144638061523GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.590 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.590 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.695 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.696 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.696 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.696 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.718 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.749 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.750 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.776 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.815 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.908 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.927 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.929 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:50:12 compute-0 nova_compute[189564]: 2025-12-01 19:50:12.930 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
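[annotation] Placement derives schedulable capacity from each inventory record as (total - reserved) * allocation_ratio, so the payload above yields 32 VCPU, 7168 MB of RAM, and 70.2 GB of disk. A worked check:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2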
Dec  1 19:50:13 compute-0 nova_compute[189564]: 2025-12-01 19:50:13.414 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:50:13 compute-0 nova_compute[189564]: 2025-12-01 19:50:13.931 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:50:13 compute-0 nova_compute[189564]: 2025-12-01 19:50:13.931 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:50:13 compute-0 nova_compute[189564]: 2025-12-01 19:50:13.932 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:50:16 compute-0 nova_compute[189564]: 2025-12-01 19:50:16.113 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:50:16 compute-0 podman[248172]: 2025-12-01 19:50:16.366955906 +0000 UTC m=+0.129861245 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:50:18 compute-0 nova_compute[189564]: 2025-12-01 19:50:18.418 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:21 compute-0 nova_compute[189564]: 2025-12-01 19:50:21.116 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:22 compute-0 podman[248199]: 2025-12-01 19:50:22.382031917 +0000 UTC m=+0.130629561 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Dec  1 19:50:23 compute-0 nova_compute[189564]: 2025-12-01 19:50:23.419 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:26 compute-0 nova_compute[189564]: 2025-12-01 19:50:26.119 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:28 compute-0 podman[248217]: 2025-12-01 19:50:28.357184901 +0000 UTC m=+0.113993215 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:50:28 compute-0 nova_compute[189564]: 2025-12-01 19:50:28.423 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:29 compute-0 podman[203750]: time="2025-12-01T19:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:50:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:50:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
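The two GETs above are the podman exporter scraping the libpod REST API over the service socket. A minimal sketch that issues the same containers/json call with only the standard library, assuming the socket path from the exporter's own config (unix:///run/podman/podman.sock) and read access to it:

    import http.client
    import json
    import socket


    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection over an AF_UNIX socket instead of TCP
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock


    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):  # field names per libpod 4.x responses
        print(c["Names"], c["State"])
    conn.close()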
Dec  1 19:50:31 compute-0 nova_compute[189564]: 2025-12-01 19:50:31.121 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:31 compute-0 podman[248244]: 2025-12-01 19:50:31.364269522 +0000 UTC m=+0.096186335 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 19:50:31 compute-0 podman[248241]: 2025-12-01 19:50:31.378983607 +0000 UTC m=+0.129846596 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9)
Dec  1 19:50:31 compute-0 podman[248242]: 2025-12-01 19:50:31.385324683 +0000 UTC m=+0.126809511 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 19:50:31 compute-0 podman[248243]: 2025-12-01 19:50:31.401982748 +0000 UTC m=+0.139179014 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Dec  1 19:50:31 compute-0 podman[248251]: 2025-12-01 19:50:31.414447823 +0000 UTC m=+0.133009203 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Dec  1 19:50:31 compute-0 openstack_network_exporter[205914]: ERROR   19:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:50:31 compute-0 openstack_network_exporter[205914]: ERROR   19:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:50:31 compute-0 openstack_network_exporter[205914]: ERROR   19:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:50:31 compute-0 openstack_network_exporter[205914]: ERROR   19:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:50:31 compute-0 openstack_network_exporter[205914]: ERROR   19:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
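The appctl errors above mean the exporter found no *.ctl control sockets for the daemons it polls; ovn-northd in particular is a control-plane daemon, so its absence on a compute node is likely expected rather than a fault. A minimal check for the sockets, assuming the rundir paths that appear in the exporter's volume mounts (/run/openvswitch, /run/ovn):

    import glob

    patterns = {
        "ovsdb-server": "/run/openvswitch/ovsdb-server.*.ctl",
        "ovs-vswitchd": "/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovn-northd": "/run/ovn/ovn-northd.*.ctl",
    }
    for daemon, pattern in patterns.items():
        found = glob.glob(pattern)
        print(daemon, found or "no control socket found")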
Dec  1 19:50:33 compute-0 nova_compute[189564]: 2025-12-01 19:50:33.426 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:36 compute-0 nova_compute[189564]: 2025-12-01 19:50:36.124 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:38 compute-0 nova_compute[189564]: 2025-12-01 19:50:38.427 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:41 compute-0 nova_compute[189564]: 2025-12-01 19:50:41.127 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:42 compute-0 podman[248335]: 2025-12-01 19:50:42.349860297 +0000 UTC m=+0.109884629 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec  1 19:50:43 compute-0 nova_compute[189564]: 2025-12-01 19:50:43.430 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:46 compute-0 nova_compute[189564]: 2025-12-01 19:50:46.130 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:47 compute-0 podman[248357]: 2025-12-01 19:50:47.307228121 +0000 UTC m=+0.077142805 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 19:50:48 compute-0 nova_compute[189564]: 2025-12-01 19:50:48.432 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:51 compute-0 nova_compute[189564]: 2025-12-01 19:50:51.132 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:53 compute-0 podman[248383]: 2025-12-01 19:50:53.378537117 +0000 UTC m=+0.135687747 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  1 19:50:53 compute-0 nova_compute[189564]: 2025-12-01 19:50:53.435 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:56 compute-0 nova_compute[189564]: 2025-12-01 19:50:56.135 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:58 compute-0 nova_compute[189564]: 2025-12-01 19:50:58.438 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:50:59 compute-0 podman[248402]: 2025-12-01 19:50:59.349502918 +0000 UTC m=+0.099383824 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:50:59 compute-0 podman[203750]: time="2025-12-01T19:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:50:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:50:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4814 "" "Go-http-client/1.1"
Dec  1 19:51:01 compute-0 nova_compute[189564]: 2025-12-01 19:51:01.138 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:01 compute-0 openstack_network_exporter[205914]: ERROR   19:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:51:01 compute-0 openstack_network_exporter[205914]: ERROR   19:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:51:01 compute-0 openstack_network_exporter[205914]: ERROR   19:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:51:01 compute-0 openstack_network_exporter[205914]: ERROR   19:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:51:01 compute-0 openstack_network_exporter[205914]: ERROR   19:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:51:02 compute-0 podman[248424]: 2025-12-01 19:51:02.32715567 +0000 UTC m=+0.088887759 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, managed_by=edpm_ansible, version=9.4, distribution-scope=public, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec  1 19:51:02 compute-0 podman[248425]: 2025-12-01 19:51:02.355110874 +0000 UTC m=+0.102198540 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 19:51:02 compute-0 podman[248432]: 2025-12-01 19:51:02.361463161 +0000 UTC m=+0.096234967 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  1 19:51:02 compute-0 podman[248426]: 2025-12-01 19:51:02.389748765 +0000 UTC m=+0.130359432 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:51:02 compute-0 podman[248433]: 2025-12-01 19:51:02.395874565 +0000 UTC m=+0.128016250 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 19:51:03 compute-0 nova_compute[189564]: 2025-12-01 19:51:03.441 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:06 compute-0 nova_compute[189564]: 2025-12-01 19:51:06.140 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:06 compute-0 nova_compute[189564]: 2025-12-01 19:51:06.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:06 compute-0 nova_compute[189564]: 2025-12-01 19:51:06.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:51:07 compute-0 nova_compute[189564]: 2025-12-01 19:51:07.252 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:51:07 compute-0 nova_compute[189564]: 2025-12-01 19:51:07.253 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:51:07 compute-0 nova_compute[189564]: 2025-12-01 19:51:07.254 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:51:08 compute-0 nova_compute[189564]: 2025-12-01 19:51:08.444 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:10 compute-0 nova_compute[189564]: 2025-12-01 19:51:10.301 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
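The network_info payload logged above is plain JSON, so the addresses can be pulled out directly. A minimal sketch; the literal below is trimmed to just the fields the code touches, with values copied from the log entry:

    import json

    raw = ('[{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "network": '
           '{"subnets": [{"ips": [{"address": "192.168.0.62", '
           '"floating_ips": [{"address": "192.168.122.240"}]}]}]}}]')

    for vif in json.loads(raw):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floating or "none")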
Dec  1 19:51:10 compute-0 nova_compute[189564]: 2025-12-01 19:51:10.322 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:51:10 compute-0 nova_compute[189564]: 2025-12-01 19:51:10.323 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 19:51:10 compute-0 nova_compute[189564]: 2025-12-01 19:51:10.324 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:10 compute-0 nova_compute[189564]: 2025-12-01 19:51:10.324 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:10 compute-0 nova_compute[189564]: 2025-12-01 19:51:10.325 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:10 compute-0 nova_compute[189564]: 2025-12-01 19:51:10.325 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:10 compute-0 nova_compute[189564]: 2025-12-01 19:51:10.326 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:51:11 compute-0 nova_compute[189564]: 2025-12-01 19:51:11.143 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:51:12.207 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:51:12.207 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:51:12.208 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:51:12 compute-0 nova_compute[189564]: 2025-12-01 19:51:12.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.282 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.282 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.283 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
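The Acquiring/acquired/released triplet above is oslo.concurrency's synchronized decorator at work around the resource tracker. An illustrative sketch of the pattern (the function name mirrors the log; this is not nova's code):

    from oslo_concurrency import lockutils


    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # runs with the in-process "compute_resources" lock held; entry
        # and exit produce the DEBUG lines from lockutils.py:409/423
        pass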
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.283 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:51:13 compute-0 podman[248520]: 2025-12-01 19:51:13.377029311 +0000 UTC m=+0.139246876 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.388 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.448 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.490 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.491 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.560 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.562 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.625 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.627 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.704 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.716 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.803 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.804 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.873 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.875 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.938 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:13 compute-0 nova_compute[189564]: 2025-12-01 19:51:13.940 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.000 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
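The repeated probes above are nova's image backend walking each instance's disk files with qemu-img info, wrapped in oslo_concurrency.prlimit so a malformed image can neither exhaust memory (address space capped at 1 GiB) nor hang the agent (CPU time capped at 30 s). A minimal sketch reproducing the same invocation with the standard library; probe_image and the example path are hypothetical, the command line is copied from the log:

import json
import subprocess

def probe_image(path):
    # Mirror the logged command: oslo_concurrency.prlimit caps the address
    # space at 1 GiB and CPU time at 30 s before exec'ing qemu-img info.
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

# e.g. probe_image("/var/lib/nova/instances/<uuid>/disk.eph0")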
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.386 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.388 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4834MB free_disk=72.36144638061523GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
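The resource view above serializes the host's PCI devices as JSON. A short sketch of reading such a list and grouping it into vendor:product pools (two entries copied from the log; the grouping itself is illustrative, not nova's actual pci_stats code):

import json
from collections import Counter

pci_devices = json.loads("""
[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
  "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
  "label": "label_1af4_1000", "dev_type": "type-PCI"},
 {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3",
  "product_id": "7113", "vendor_id": "8086", "numa_node": null,
  "label": "label_8086_7113", "dev_type": "type-PCI"}]
""")

# Group by (vendor_id, product_id), as a PCI pool summary would.
pools = Counter((d["vendor_id"], d["product_id"]) for d in pci_devices)
for (vendor, product), count in pools.items():
    print(f"{vendor}:{product} x{count}")   # e.g. 1af4:1000 x1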
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.388 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.389 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.635 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.635 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.636 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.636 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.811 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.827 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
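Placement's usable capacity per resource class is (total - reserved) × allocation_ratio, so the inventory above advertises 32 schedulable VCPUs, 7168 MB of RAM, and 70.2 GB of disk. A worked check of that arithmetic (numbers copied from the log):

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
# VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 70.2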
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.829 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:51:14 compute-0 nova_compute[189564]: 2025-12-01 19:51:14.829 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
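The Acquiring/acquired/released triple around _update_available_resource is oslo.concurrency's standard instrumentation for a synchronized section. A minimal sketch of the same pattern; the lock name comes from the log, the function body is a placeholder:

from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def update_available_resource():
    # Serialized against other resource-tracker operations under the
    # "compute_resources" lock, exactly like the instance_claim later
    # in this log.
    ...

# Equivalent context-manager form:
# with lockutils.lock("compute_resources"):
#     ...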
Dec  1 19:51:15 compute-0 nova_compute[189564]: 2025-12-01 19:51:15.824 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:15 compute-0 nova_compute[189564]: 2025-12-01 19:51:15.825 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:15 compute-0 nova_compute[189564]: 2025-12-01 19:51:15.847 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:16 compute-0 nova_compute[189564]: 2025-12-01 19:51:16.146 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:18 compute-0 podman[248564]: 2025-12-01 19:51:18.318808106 +0000 UTC m=+0.075936929 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 19:51:18 compute-0 nova_compute[189564]: 2025-12-01 19:51:18.450 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:21 compute-0 nova_compute[189564]: 2025-12-01 19:51:21.149 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:23 compute-0 nova_compute[189564]: 2025-12-01 19:51:23.454 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:24 compute-0 podman[248586]: 2025-12-01 19:51:24.43047419 +0000 UTC m=+0.187294972 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 19:51:25 compute-0 nova_compute[189564]: 2025-12-01 19:51:25.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:25 compute-0 nova_compute[189564]: 2025-12-01 19:51:25.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 19:51:25 compute-0 nova_compute[189564]: 2025-12-01 19:51:25.272 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 19:51:25 compute-0 systemd-logind[797]: New session 30 of user zuul.
Dec  1 19:51:25 compute-0 systemd[1]: Started Session 30 of User zuul.
Dec  1 19:51:26 compute-0 nova_compute[189564]: 2025-12-01 19:51:26.153 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:26 compute-0 nova_compute[189564]: 2025-12-01 19:51:26.337 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:26 compute-0 python3[248785]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
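The Ansible task above checks the ceilometer_agent_compute container through a shell pipeline. The same check as a self-contained Python sketch (container_status is hypothetical; the podman command line is the one from the log):

import subprocess

def container_status(name):
    # podman ps -a --format "{{.Names}} {{.Status}}" | grep <name>
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if name in line]

print(container_status("ceilometer_agent_compute"))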
Dec  1 19:51:27 compute-0 nova_compute[189564]: 2025-12-01 19:51:27.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:28 compute-0 nova_compute[189564]: 2025-12-01 19:51:28.457 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:29 compute-0 nova_compute[189564]: 2025-12-01 19:51:29.272 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:51:29 compute-0 nova_compute[189564]: 2025-12-01 19:51:29.274 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 19:51:29 compute-0 podman[203750]: time="2025-12-01T19:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:51:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:51:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec  1 19:51:30 compute-0 podman[248828]: 2025-12-01 19:51:30.240946008 +0000 UTC m=+0.145501247 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:51:31 compute-0 nova_compute[189564]: 2025-12-01 19:51:31.156 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:31 compute-0 openstack_network_exporter[205914]: ERROR   19:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:51:31 compute-0 openstack_network_exporter[205914]: ERROR   19:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:51:31 compute-0 openstack_network_exporter[205914]: ERROR   19:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:51:31 compute-0 openstack_network_exporter[205914]: ERROR   19:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:51:31 compute-0 openstack_network_exporter[205914]: ERROR   19:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:51:33 compute-0 podman[248853]: 2025-12-01 19:51:33.364720922 +0000 UTC m=+0.103584785 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Dec  1 19:51:33 compute-0 podman[248852]: 2025-12-01 19:51:33.379275114 +0000 UTC m=+0.134179565 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, com.redhat.component=ubi9-container, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  1 19:51:33 compute-0 podman[248859]: 2025-12-01 19:51:33.382485362 +0000 UTC m=+0.109051454 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec  1 19:51:33 compute-0 podman[248860]: 2025-12-01 19:51:33.387962322 +0000 UTC m=+0.110418896 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 19:51:33 compute-0 podman[248861]: 2025-12-01 19:51:33.437800779 +0000 UTC m=+0.155823486 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
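Each health_status=healthy event above is podman running the healthcheck command configured for the container (the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks). The same probe can be triggered manually; a sketch, assuming podman is on PATH and using one container name from the log:

import subprocess

# Exit status 0 corresponds to the health_status=healthy events logged above.
result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
print("healthy" if result.returncode == 0 else "unhealthy")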
Dec  1 19:51:33 compute-0 nova_compute[189564]: 2025-12-01 19:51:33.460 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:36 compute-0 nova_compute[189564]: 2025-12-01 19:51:36.158 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:38 compute-0 nova_compute[189564]: 2025-12-01 19:51:38.463 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:41 compute-0 nova_compute[189564]: 2025-12-01 19:51:41.162 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.467 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.468 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.487 189568 DEBUG nova.compute.manager [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.576 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.577 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.588 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.589 189568 INFO nova.compute.claims [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.761 189568 DEBUG nova.compute.provider_tree [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.776 189568 DEBUG nova.scheduler.client.report [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.807 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.809 189568 DEBUG nova.compute.manager [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.864 189568 DEBUG nova.compute.manager [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.882 189568 INFO nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 19:51:42 compute-0 nova_compute[189564]: 2025-12-01 19:51:42.914 189568 DEBUG nova.compute.manager [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 19:51:43 compute-0 nova_compute[189564]: 2025-12-01 19:51:43.000 189568 DEBUG nova.compute.manager [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 19:51:43 compute-0 nova_compute[189564]: 2025-12-01 19:51:43.002 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 19:51:43 compute-0 nova_compute[189564]: 2025-12-01 19:51:43.003 189568 INFO nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Creating image(s)#033[00m
Dec  1 19:51:43 compute-0 nova_compute[189564]: 2025-12-01 19:51:43.004 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:43 compute-0 nova_compute[189564]: 2025-12-01 19:51:43.004 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:43 compute-0 nova_compute[189564]: 2025-12-01 19:51:43.005 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:51:43 compute-0 nova_compute[189564]: 2025-12-01 19:51:43.006 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "ac10605fd1db743aca604ff67d0f873a18376180" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:43 compute-0 nova_compute[189564]: 2025-12-01 19:51:43.006 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ac10605fd1db743aca604ff67d0f873a18376180" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:43 compute-0 nova_compute[189564]: 2025-12-01 19:51:43.466 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.200 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.268 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180.part --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.270 189568 DEBUG nova.virt.images [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] 2db0dcf5-70ca-4fe0-b205-4e14a99e3eee was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.271 189568 DEBUG nova.privsep.utils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
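Nova only converts with host caching disabled when the target filesystem accepts O_DIRECT, which is what the supports_direct_io check above just verified; the convert that follows therefore runs with -t none. A sketch of such a probe (Linux-only; the 512-byte alignment and the probe filename are assumptions, not nova's exact code):

import mmap
import os

def supports_direct_io(dirpath, align=512):
    # O_DIRECT requires an aligned buffer; mmap returns page-aligned memory.
    probe = os.path.join(dirpath, ".directio.test")
    buf = mmap.mmap(-1, align)
    try:
        fd = os.open(probe, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
        try:
            os.write(fd, buf)   # fails with EINVAL if the fs rejects O_DIRECT
            return True
        finally:
            os.close(fd)
    except OSError:
        return False
    finally:
        buf.close()
        try:
            os.unlink(probe)
        except OSError:
            pass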
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.272 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180.part /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:44 compute-0 podman[248947]: 2025-12-01 19:51:44.383736084 +0000 UTC m=+0.133727950 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.492 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180.part /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180.converted" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.496 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.579 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180.converted --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.581 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ac10605fd1db743aca604ff67d0f873a18376180" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
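The sequence above is nova's fetch_to_raw path: the qcow2 image downloaded from Glance (the .part file) is converted to raw with host caching disabled, re-probed, and only then promoted into the _base cache. A condensed sketch of the convert-and-verify step (function name and paths hypothetical; flags copied from the log):

import json
import subprocess

def convert_to_raw(src, dst):
    subprocess.run(
        ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
         src, dst],
        check=True,
    )
    info = json.loads(subprocess.run(
        ["qemu-img", "info", dst, "--force-share", "--output=json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    assert info["format"] == "raw", "conversion produced unexpected format"
    return info

# convert_to_raw("/var/lib/nova/instances/_base/<hash>.part",
#                "/var/lib/nova/instances/_base/<hash>.converted")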
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.614 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.719 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.721 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "ac10605fd1db743aca604ff67d0f873a18376180" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.722 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ac10605fd1db743aca604ff67d0f873a18376180" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.750 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.807 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.809 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180,backing_fmt=raw /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.859 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180,backing_fmt=raw /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.860 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ac10605fd1db743aca604ff67d0f873a18376180" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
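With the raw base image in the cache, the instance's root disk is created as a thin qcow2 overlay whose backing file is the shared base, so the 1073741824-byte disk initially occupies almost no space. A sketch of the overlay creation shown above (wrapper hypothetical, command copied from the log):

import subprocess

def create_overlay(backing, disk, size_bytes):
    # qcow2 copy-on-write overlay on a raw backing file, as logged above.
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={backing},backing_fmt=raw",
         disk, str(size_bytes)],
        check=True,
    )

# create_overlay("/var/lib/nova/instances/_base/<hash>",
#                "/var/lib/nova/instances/<uuid>/disk", 1073741824)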
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.861 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.936 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.937 189568 DEBUG nova.virt.disk.api [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Checking if we can resize image /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 19:51:44 compute-0 nova_compute[189564]: 2025-12-01 19:51:44.938 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.037 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.038 189568 DEBUG nova.virt.disk.api [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Cannot resize image /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
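
The resize probe above reads the overlay's virtual size from qemu-img's JSON output and refuses to shrink: a root disk is only ever grown toward the flavor size. Roughly (the shape of the check is inferred from the log; the paths are the logged ones):

    import json
    import subprocess

    def virtual_size(path):
        # qemu-img info emits a JSON document; "virtual-size" is in bytes.
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)["virtual-size"]

    requested = 1073741824  # flavor root disk, as logged
    disk = "/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk"
    if virtual_size(disk) >= requested:
        # matches "Cannot resize image ... to a smaller size."
        print("skip resize: disks are never shrunk")
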
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.039 189568 DEBUG nova.objects.instance [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'migration_context' on Instance uuid a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.054 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.055 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.055 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
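
disk.info records which driver format each image file uses, and the acquire/release pair above is an external, file-based lock so concurrent builds on this host cannot interleave writes to it (the lock key is the disk.info path itself). The usual oslo.concurrency shape of that pattern, with an illustrative lock path:

    from oslo_concurrency import lockutils

    @lockutils.synchronized(
        "/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.info",
        external=True, lock_path="/var/lib/nova/instances/locks")  # path illustrative
    def write_to_disk_info_file():
        ...  # read-modify-write of the {image path: driver format} JSON
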
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.074 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.152 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.153 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.154 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.172 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.232 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.233 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.280 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.eph0 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.282 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.282 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.358 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.359 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.359 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Ensure instance console log exists: /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.360 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.360 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.360 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.362 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T19:51:31Z,direct_url=<?>,disk_format='qcow2',id=2db0dcf5-70ca-4fe0-b205-4e14a99e3eee,min_disk=0,min_ram=0,name='fvt_testing_image',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T19:51:35Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '2db0dcf5-70ca-4fe0-b205-4e14a99e3eee'}], 'ephemerals': [{'guest_format': None, 'encryption_options': None, 'size': 1, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.370 189568 WARNING nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.376 189568 DEBUG nova.virt.libvirt.host [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.377 189568 DEBUG nova.virt.libvirt.host [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.381 189568 DEBUG nova.virt.libvirt.host [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.381 189568 DEBUG nova.virt.libvirt.host [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
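
The two probes above fail on cgroups v1 and succeed on v2, which is what a cgroups-v2-only host looks like. On v2 the check reduces to reading one file in the unified hierarchy; a sketch (the function name is mine):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # The unified hierarchy lists its available controllers,
        # space-separated, in a single file at its root.
        controllers = Path(root, "cgroup.controllers")
        return controllers.is_file() and "cpu" in controllers.read_text().split()
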
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.382 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.382 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T19:51:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='bcf15242-66f9-49c0-8c36-f60d85ca0bf0',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-01T19:51:31Z,direct_url=<?>,disk_format='qcow2',id=2db0dcf5-70ca-4fe0-b205-4e14a99e3eee,min_disk=0,min_ram=0,name='fvt_testing_image',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-01T19:51:35Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.382 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.382 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.382 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.383 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.383 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.383 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.383 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.383 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.383 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.384 189568 DEBUG nova.virt.hardware [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
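
With no flavor or image constraints, the topology search above degenerates to the single (1, 1, 1) layout for one vCPU. The enumeration it performs is essentially the following (an illustrative reimplementation, not nova.virt.hardware itself):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is exactly the
        # vCPU count; the maxima mirror the "limits were ..." line above.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]: "Got 1 possible topologies"
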
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.387 189568 DEBUG nova.objects.instance [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'pci_devices' on Instance uuid a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.409 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] End _get_guest_xml xml=<domain type="kvm">
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <uuid>a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d</uuid>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <name>instance-00000004</name>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <memory>524288</memory>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <metadata>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <nova:name>fvt_testing_server</nova:name>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 19:51:45</nova:creationTime>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <nova:flavor name="fvt_testing_flavor">
Dec  1 19:51:45 compute-0 nova_compute[189564]:        <nova:memory>512</nova:memory>
Dec  1 19:51:45 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 19:51:45 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 19:51:45 compute-0 nova_compute[189564]:        <nova:ephemeral>1</nova:ephemeral>
Dec  1 19:51:45 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 19:51:45 compute-0 nova_compute[189564]:        <nova:user uuid="7c24e8f82e7842b785e565ac65c7f494">admin</nova:user>
Dec  1 19:51:45 compute-0 nova_compute[189564]:        <nova:project uuid="35d2a9caf1634dca9fc12ec078239d84">admin</nova:project>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="2db0dcf5-70ca-4fe0-b205-4e14a99e3eee"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <nova:ports/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  </metadata>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <system>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <entry name="serial">a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d</entry>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <entry name="uuid">a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d</entry>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    </system>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <os>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  </os>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <features>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <apic/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  </features>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  </clock>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  </cpu>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  <devices>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.eph0"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <target dev="vdb" bus="virtio"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.config"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    </disk>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/console.log" append="off"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    </serial>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <video>
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    </video>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    </rng>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 19:51:45 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 19:51:45 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 19:51:45 compute-0 nova_compute[189564]:  </devices>
Dec  1 19:51:45 compute-0 nova_compute[189564]: </domain>
Dec  1 19:51:45 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
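
Once _get_guest_xml returns, the driver hands this document to libvirt, which is what produces the systemd "New machine" lines a moment later. A bare-bones sketch of that hand-off with the libvirt Python bindings (Nova's real call path adds flags, device metadata, and event plumbing; the file name is a placeholder):

    import libvirt

    with open("instance-00000004.xml") as f:  # the <domain> dump above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)  # persistent definition
    dom.create()               # boots the guest
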
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.462 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.464 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.465 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.466 189568 INFO nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Using config drive#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.943 189568 INFO nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Creating config drive at /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.config#033[00m
Dec  1 19:51:45 compute-0 nova_compute[189564]: 2025-12-01 19:51:45.952 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvemm6u7a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.102 189568 DEBUG oslo_concurrency.processutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvemm6u7a" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
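
The config drive is a plain ISO9660 image labelled config-2, which cloud-init inside the guest recognizes and mounts. The flags below are copied from the command logged above; only the staging directory (a private temp dir in the log) and the shortened publisher string are placeholders:

    import subprocess

    staging = "/tmp/configdrive-staging"  # the log used a tempdir, /tmp/tmpvemm6u7a
    subprocess.run(
        ["mkisofs", "-o", "disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute", "-quiet",
         "-J", "-r", "-V", "config-2",
         staging],
        check=True,
    )
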
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.165 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:46 compute-0 systemd-machined[155891]: New machine qemu-4-instance-00000004.
Dec  1 19:51:46 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.555 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764618706.5541115, a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.556 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] VM Resumed (Lifecycle Event)#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.559 189568 DEBUG nova.compute.manager [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.559 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.565 189568 INFO nova.virt.libvirt.driver [-] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Instance spawned successfully.#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.565 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.595 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.600 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.639 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.639 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764618706.5601304, a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.639 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] VM Started (Lifecycle Event)#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.660 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.667 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.667 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.667 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.668 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.668 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.669 189568 DEBUG nova.virt.libvirt.driver [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
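
The six "Found default for ..." lines above condense to this mapping, which the driver persists with the instance so later rebuilds and attachments keep the same virtual hardware:

    defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }
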
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.673 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.705 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
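
Both lifecycle events resolve the same way: the database still says power_state 0 (NOSTATE) while libvirt reports 1 (RUNNING), but a task is in flight, so the sync defers rather than fight the spawning code path. In miniature (the numeric values match nova.compute.power_state; the function is a stand-in):

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:            # e.g. 'spawning'
            return "skip"                     # "has a pending task ... Skip."
        if db_power_state != vm_power_state:
            return "record new state and reconcile"
        return "in sync"

    print(sync_power_state("spawning", NOSTATE, RUNNING))  # -> skip
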
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.721 189568 INFO nova.compute.manager [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Took 3.72 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.721 189568 DEBUG nova.compute.manager [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.790 189568 INFO nova.compute.manager [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Took 4.25 seconds to build instance.#033[00m
Dec  1 19:51:46 compute-0 nova_compute[189564]: 2025-12-01 19:51:46.811 189568 DEBUG oslo_concurrency.lockutils [None req-138e769f-223c-4547-af2a-89a176cd7817 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:51:48 compute-0 nova_compute[189564]: 2025-12-01 19:51:48.470 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:51:48 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.817 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; polling can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.818 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.818 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.819 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
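
The earlier warning and the registrations above describe one pattern: stevedore-loaded pollsters are queued onto a ThreadPoolExecutor that has fewer workers (one) than pollsters, so they simply run in turn. A self-contained sketch of that shape with stand-in pollsters:

    from concurrent.futures import ThreadPoolExecutor

    # Stand-ins; ceilometer's pollsters are stevedore extension objects.
    pollsters = [lambda name=n: f"polled {name}"
                 for n in ("cpu", "memory.usage", "network.incoming.bytes")]

    with ThreadPoolExecutor(max_workers=1) as executor:    # "[1] threads"
        futures = [executor.submit(p) for p in pollsters]  # one per registration
        for f in futures:
            print(f.result())                              # executed in turn
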
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.837 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.840 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 19:51:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:48.841 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 19:51:48 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 19:51:48 compute-0 podman[249034]: 2025-12-01 19:51:48.917299847 +0000 UTC m=+0.093066368 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.269 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Mon, 01 Dec 2025 19:51:48 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-81e0a800-c17f-4dff-a7fd-ea74e17eb991 x-openstack-request-id: req-81e0a800-c17f-4dff-a7fd-ea74e17eb991 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.269 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "user_id": "7c24e8f82e7842b785e565ac65c7f494", "metadata": {}, "hostId": "e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6", "image": {"id": "2db0dcf5-70ca-4fe0-b205-4e14a99e3eee", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2db0dcf5-70ca-4fe0-b205-4e14a99e3eee"}]}, "flavor": {"id": "bcf15242-66f9-49c0-8c36-f60d85ca0bf0", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bcf15242-66f9-49c0-8c36-f60d85ca0bf0"}]}, "created": "2025-12-01T19:51:41Z", "updated": "2025-12-01T19:51:46Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T19:51:46.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.269 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d used request id req-81e0a800-c17f-4dff-a7fd-ea74e17eb991 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.271 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d', 'name': 'fvt_testing_server', 'flavor': {'id': 'bcf15242-66f9-49c0-8c36-f60d85ca0bf0', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2db0dcf5-70ca-4fe0-b205-4e14a99e3eee'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.275 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'name': 'vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.275 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.276 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.276 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.276 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:51:49.276154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.281 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.291 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.292 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.292 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.292 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.292 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.293 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.293 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.293 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.293 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.294 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:51:49.293109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.294 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.294 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.294 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.294 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.294 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.295 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:51:49.295078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.295 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.295 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.296 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.296 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.296 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.296 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.296 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.296 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:51:49.296835) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.297 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.297 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.298 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.298 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.298 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.298 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.298 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.298 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:51:49.298783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.299 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.299 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.300 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.300 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.300 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.300 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.300 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.300 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:51:49.300802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.331 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.332 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.332 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.363 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.363 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.363 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.401 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.401 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.401 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.402 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.402 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.402 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.402 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.402 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.402 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:51:49.402515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.518 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.519 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.519 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.643 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.644 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.644 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.754 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.755 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.755 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.756 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.757 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.757 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.757 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.757 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.758 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.758 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.758 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.759 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.759 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.760 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.760 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.760 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.761 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.761 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.761 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:51:49.758013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.761 15 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from any further polling of [<NovaLikeServer: fvt_testing_server>] on source pollsters!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.762 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T19:51:49.761059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.762 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.762 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.762 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.762 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.763 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.763 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.763 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.764 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:51:49.762937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.765 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.read.latency volume: 421775777 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.765 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.766 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.read.latency volume: 1446854 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.766 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 578521054 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.767 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 98903610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.767 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 76991265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.768 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.768 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.769 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.769 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.769 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.769 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.769 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.770 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.770 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.771 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.771 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.772 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.772 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:51:49.769694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.773 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.774 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.774 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.775 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.775 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.775 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.775 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.776 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.776 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.776 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.777 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.777 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.778 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:51:49.776134) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.779 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.779 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.780 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.781 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.782 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.782 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.782 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.783 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.783 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.783 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.783 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.784 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.784 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.785 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:51:49.783479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.785 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.786 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.786 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.787 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.787 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.788 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.788 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.789 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.789 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.789 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.789 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.790 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:51:49.789669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.832 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.879 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.918 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.919 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.919 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.919 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.920 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.920 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.920 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.921 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:51:49.920354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.921 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.921 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.922 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.922 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.923 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.923 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 2063543219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.924 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 12721696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.924 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.925 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.925 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.925 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.925 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.926 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.926 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.926 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.926 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.927 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.927 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.928 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.928 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.929 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.929 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.930 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.931 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.931 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.931 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.932 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:51:49.926222) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.932 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.932 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.932 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.933 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.934 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.934 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.935 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.935 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:51:49.932495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.936 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.936 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.937 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.937 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.938 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.938 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.938 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.938 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.939 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.939 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:51:49.939051) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.940 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.940 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.941 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.941 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.941 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.941 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.941 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.942 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.943 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.943 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.943 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.943 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.943 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.944 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.945 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.945 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.945 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.946 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.946 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:51:49.941635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.946 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.946 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.947 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:51:49.944088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.947 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.947 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.948 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.948 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.948 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.948 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.948 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.948 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:51:49.946683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.948 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.949 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.949 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.949 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.949 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.949 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.949 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.950 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.950 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.950 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:51:49.948891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.950 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.951 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.951 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.951 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.951 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.951 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.951 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.951 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:51:49.950186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.952 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.952 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.952 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.952 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T19:51:49.951932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.952 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.952 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.952 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.953 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 52080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.953 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/cpu volume: 3140000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.953 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/cpu volume: 45000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.954 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.954 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.954 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.954 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.954 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.954 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.954 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.955 15 DEBUG ceilometer.compute.pollsters [-] a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.955 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d: ceilometer.compute.pollsters.NoVolumeException
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.955 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.955 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.956 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.956 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:51:49.952970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:51:49.954661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.957 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.959 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.959 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.960 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.960 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.960 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.961 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.961 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.961 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.961 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.961 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.962 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.962 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.962 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.962 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.962 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.962 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.963 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.963 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.963 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.963 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.963 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.963 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:51:49.963 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:51:51 compute-0 nova_compute[189564]: 2025-12-01 19:51:51.168 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:51:53 compute-0 nova_compute[189564]: 2025-12-01 19:51:53.472 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:51:55 compute-0 podman[249082]: 2025-12-01 19:51:55.361547056 +0000 UTC m=+0.117109045 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:51:56 compute-0 nova_compute[189564]: 2025-12-01 19:51:56.170 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:51:58 compute-0 nova_compute[189564]: 2025-12-01 19:51:58.475 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:51:59 compute-0 podman[203750]: time="2025-12-01T19:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:51:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:51:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.434 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.435 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.436 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.437 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.438 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.440 189568 INFO nova.compute.manager [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Terminating instance
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.442 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "refresh_cache-a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.443 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquired lock "refresh_cache-a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.444 189568 DEBUG nova.network.neutron [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.653 189568 DEBUG nova.network.neutron [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.968 189568 DEBUG nova.network.neutron [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.983 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Releasing lock "refresh_cache-a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:52:00 compute-0 nova_compute[189564]: 2025-12-01 19:52:00.985 189568 DEBUG nova.compute.manager [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  1 19:52:01 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  1 19:52:01 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 15.227s CPU time.
Dec  1 19:52:01 compute-0 systemd-machined[155891]: Machine qemu-4-instance-00000004 terminated.
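systemd escapes '-' as \x2d in unit names, so machine-qemu\x2d4\x2dinstance\x2d00000004.scope above is the scope backing machine qemu-4-instance-00000004 from the systemd-machined line. A small decoder, assuming only \xNN escapes occur (systemd-escape(1) covers the general case):

    import re

    def systemd_unescape(name: str) -> str:
        # turn \xNN sequences back into the characters they encode
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(systemd_unescape(r'machine-qemu\x2d4\x2dinstance\x2d00000004.scope'))
    # -> machine-qemu-4-instance-00000004.scope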
Dec  1 19:52:01 compute-0 podman[249101]: 2025-12-01 19:52:01.130519971 +0000 UTC m=+0.075710140 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.173 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.269 189568 INFO nova.virt.libvirt.driver [-] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Instance destroyed successfully.#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.270 189568 DEBUG nova.objects.instance [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'resources' on Instance uuid a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.290 189568 INFO nova.virt.libvirt.driver [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Deleting instance files /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d_del#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.291 189568 INFO nova.virt.libvirt.driver [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Deletion of /var/lib/nova/instances/a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d_del complete#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.350 189568 INFO nova.compute.manager [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.351 189568 DEBUG oslo.service.loopingcall [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.352 189568 DEBUG nova.compute.manager [-] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.352 189568 DEBUG nova.network.neutron [-] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
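The "Waiting for function … _deallocate_network_with_retries to return" line is oslo.service's looping-call machinery. A sketch of the general pattern with FixedIntervalLoopingCall (chosen for illustration; the retry wrapper named in the log may be a different looping-call variant):

    from oslo_service import loopingcall

    def _poll():
        # returning normally keeps the loop running; raising
        # LoopingCallDone stops it and hands a value back to the waiter
        raise loopingcall.LoopingCallDone(retvalue='deallocated')

    timer = loopingcall.FixedIntervalLoopingCall(_poll)
    print(timer.start(interval=1.0).wait())  # 'deallocated'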
Dec  1 19:52:01 compute-0 openstack_network_exporter[205914]: ERROR   19:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:52:01 compute-0 openstack_network_exporter[205914]: ERROR   19:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:52:01 compute-0 openstack_network_exporter[205914]: ERROR   19:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:52:01 compute-0 openstack_network_exporter[205914]: ERROR   19:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:52:01 compute-0 openstack_network_exporter[205914]: ERROR   19:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
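These exporter errors are consistent with openstack_network_exporter probing for ovn-northd and ovsdb-server control sockets, which exist on controller nodes but not on this compute host. An illustrative check (the glob patterns are an assumption about socket naming, not taken from the exporter's source):

    import glob

    # both lists are typically empty on a compute node, matching the errors above
    print(glob.glob('/var/run/ovn/ovn-northd.*.ctl'))
    print(glob.glob('/var/run/openvswitch/ovsdb-server.*.ctl'))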
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.850 189568 DEBUG nova.network.neutron [-] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.868 189568 DEBUG nova.network.neutron [-] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.889 189568 INFO nova.compute.manager [-] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Took 0.54 seconds to deallocate network for instance.#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.943 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:52:01 compute-0 nova_compute[189564]: 2025-12-01 19:52:01.944 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:52:02 compute-0 nova_compute[189564]: 2025-12-01 19:52:02.113 189568 DEBUG nova.compute.provider_tree [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:52:02 compute-0 nova_compute[189564]: 2025-12-01 19:52:02.147 189568 DEBUG nova.scheduler.client.report [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:52:02 compute-0 nova_compute[189564]: 2025-12-01 19:52:02.173 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:52:02 compute-0 nova_compute[189564]: 2025-12-01 19:52:02.212 189568 INFO nova.scheduler.client.report [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Deleted allocations for instance a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d#033[00m
Dec  1 19:52:02 compute-0 nova_compute[189564]: 2025-12-01 19:52:02.294 189568 DEBUG oslo_concurrency.lockutils [None req-816b7c2e-492d-4d17-9333-ae08ed28950b 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
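"Deleted allocations for instance …" corresponds to a DELETE on the Placement allocations endpoint for the consumer UUID. A hedged sketch of the equivalent raw call (endpoint shape per the Placement API reference; the URL and token are placeholders):

    import requests

    resp = requests.delete(
        'http://placement.example.com/allocations/'
        'a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d',
        headers={'X-Auth-Token': 'PLACEHOLDER',
                 'OpenStack-API-Version': 'placement 1.28'},
    )
    assert resp.status_code == 204  # allocations removed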
Dec  1 19:52:03 compute-0 nova_compute[189564]: 2025-12-01 19:52:03.478 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:04 compute-0 podman[249141]: 2025-12-01 19:52:04.336061351 +0000 UTC m=+0.091627003 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 19:52:04 compute-0 podman[249137]: 2025-12-01 19:52:04.350084457 +0000 UTC m=+0.100269242 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:52:04 compute-0 podman[249136]: 2025-12-01 19:52:04.35082869 +0000 UTC m=+0.120481259 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_id=edpm, architecture=x86_64, container_name=kepler, managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 19:52:04 compute-0 podman[249144]: 2025-12-01 19:52:04.371625495 +0000 UTC m=+0.114470503 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 19:52:04 compute-0 podman[249151]: 2025-12-01 19:52:04.400624095 +0000 UTC m=+0.131738379 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
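Each podman health_status event above packs its fields as key=value pairs inside the parentheses. A quick extraction of container name and health state (the name=…, health_status=… ordering matches these lines; a robust parser should not depend on it):

    import re

    line = ('... container health_status ac5c9902... (image=..., '
            'name=ovn_controller, health_status=healthy, ...)')
    m = re.search(r'\bname=([^,)]+).*?\bhealth_status=([^,)]+)', line)
    if m:
        print(m.group(1), m.group(2))  # ovn_controller healthy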
Dec  1 19:52:06 compute-0 nova_compute[189564]: 2025-12-01 19:52:06.175 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:08 compute-0 nova_compute[189564]: 2025-12-01 19:52:08.271 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:08 compute-0 nova_compute[189564]: 2025-12-01 19:52:08.272 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:52:08 compute-0 nova_compute[189564]: 2025-12-01 19:52:08.273 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:52:08 compute-0 nova_compute[189564]: 2025-12-01 19:52:08.482 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:09 compute-0 nova_compute[189564]: 2025-12-01 19:52:09.294 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:52:09 compute-0 nova_compute[189564]: 2025-12-01 19:52:09.295 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:52:09 compute-0 nova_compute[189564]: 2025-12-01 19:52:09.295 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:52:09 compute-0 nova_compute[189564]: 2025-12-01 19:52:09.295 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:52:11 compute-0 nova_compute[189564]: 2025-12-01 19:52:11.178 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:11 compute-0 nova_compute[189564]: 2025-12-01 19:52:11.807 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:52:11 compute-0 nova_compute[189564]: 2025-12-01 19:52:11.828 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:52:11 compute-0 nova_compute[189564]: 2025-12-01 19:52:11.829 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
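The network_info blob logged for e73931e9… is plain JSON, so its addresses can be read back directly. A sketch against the structure shown above, with the payload trimmed to the fields used:

    import json

    nw_info = json.loads('''[{"devname": "tap3cef930c-87",
        "network": {"subnets": [{"ips": [{"address": "192.168.0.47",
            "floating_ips": [{"address": "192.168.122.206"}]}]}]}}]''')
    port = nw_info[0]
    ip = port['network']['subnets'][0]['ips'][0]
    print(ip['address'])                      # 192.168.0.47
    print(ip['floating_ips'][0]['address'])   # 192.168.122.206
    print(port['devname'])                    # tap3cef930c-87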
Dec  1 19:52:11 compute-0 nova_compute[189564]: 2025-12-01 19:52:11.829 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:11 compute-0 nova_compute[189564]: 2025-12-01 19:52:11.830 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:11 compute-0 nova_compute[189564]: 2025-12-01 19:52:11.830 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:11 compute-0 nova_compute[189564]: 2025-12-01 19:52:11.831 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:11 compute-0 nova_compute[189564]: 2025-12-01 19:52:11.831 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
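The "Running periodic task ComputeManager.…" lines come from oslo_service.periodic_task, which registers decorated methods and dispatches whichever are due each time run_periodic_tasks() is called; _reclaim_queued_deletes then short-circuits because reclaim_instance_interval <= 0. The shape of the pattern (an illustrative manager, not Nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass  # task body; runs once its 60 s spacing has elapsed

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # call this from a timer loop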
Dec  1 19:52:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:52:12.208 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:52:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:52:12.208 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:52:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:52:12.209 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:52:13 compute-0 nova_compute[189564]: 2025-12-01 19:52:13.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:13 compute-0 nova_compute[189564]: 2025-12-01 19:52:13.487 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:14 compute-0 nova_compute[189564]: 2025-12-01 19:52:14.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:14 compute-0 podman[249232]: 2025-12-01 19:52:14.839008679 +0000 UTC m=+0.141258493 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.294 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.295 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.295 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.295 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.429 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.524 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.525 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.614 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.615 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.680 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.681 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.740 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.748 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.840 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.842 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.939 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:52:15 compute-0 nova_compute[189564]: 2025-12-01 19:52:15.941 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.040 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.041 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.137 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
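The qemu-img invocations above are built by oslo_concurrency.processutils with a prlimit wrapper: --as caps address space at 1 GiB and --cpu caps CPU time at 30 s, so a stuck or hostile image cannot hang the resource audit. The equivalent call, using only arguments visible in the log:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30),
    )
    # out is JSON describing the image: format, virtual size, and so on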
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.180 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.265 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764618721.2646422, a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.267 189568 INFO nova.compute.manager [-] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] VM Stopped (Lifecycle Event)#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.293 189568 DEBUG nova.compute.manager [None req-cf3eb157-f6c3-4e4e-bbf7-6c3f862ab185 - - - - - -] [instance: a5d5ccb2-21ac-4d7a-9991-efdf9cbb499d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.608 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.610 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4841MB free_disk=72.33403015136719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.611 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.611 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.694 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.695 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.696 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.696 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.769 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.788 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.809 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:52:16 compute-0 nova_compute[189564]: 2025-12-01 19:52:16.810 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
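The "Final resource view" numbers agree with the two allocations reported just above plus the 512 MB reserved in the inventory line: used_ram = 512 + 2 x 512 = 1536 MB, used_disk = 2 x 2 = 4 GB, used_vcpus = 2 x 1 = 2. As arithmetic (a reconstruction from the logged values, not Nova's code):

    reserved_mb = 512  # MEMORY_MB 'reserved' in the inventory line above
    allocs = [{'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 2}] * 2
    print(reserved_mb + sum(a['MEMORY_MB'] for a in allocs))  # 1536 (used_ram, MB)
    print(sum(a['DISK_GB'] for a in allocs))                  # 4 (used_disk, GB)
    print(sum(a['VCPU'] for a in allocs))                     # 2 (used_vcpus)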
Dec  1 19:52:17 compute-0 nova_compute[189564]: 2025-12-01 19:52:17.739 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:52:17 compute-0 nova_compute[189564]: 2025-12-01 19:52:17.768 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid e73931e9-f7fa-4666-b781-700b385532a9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 19:52:17 compute-0 nova_compute[189564]: 2025-12-01 19:52:17.769 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid 850ac274-3f22-41ce-b7d7-ac64d7adac70 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 19:52:17 compute-0 nova_compute[189564]: 2025-12-01 19:52:17.769 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:52:17 compute-0 nova_compute[189564]: 2025-12-01 19:52:17.770 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "e73931e9-f7fa-4666-b781-700b385532a9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:52:17 compute-0 nova_compute[189564]: 2025-12-01 19:52:17.770 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:52:17 compute-0 nova_compute[189564]: 2025-12-01 19:52:17.770 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:52:17 compute-0 nova_compute[189564]: 2025-12-01 19:52:17.816 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "e73931e9-f7fa-4666-b781-700b385532a9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:52:17 compute-0 nova_compute[189564]: 2025-12-01 19:52:17.817 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:52:18 compute-0 nova_compute[189564]: 2025-12-01 19:52:18.490 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:19 compute-0 podman[249279]: 2025-12-01 19:52:19.330349453 +0000 UTC m=+0.081862061 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
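node_exporter's --collector.systemd.unit-include flag above restricts the systemd collector to units matching the given regex (node_exporter anchors include patterns). Checking a few names against the exact pattern from the log:

    import re

    unit_include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
    for unit in ('edpm_nova_compute.service', 'virtqemud.service', 'sshd.service'):
        print(unit, bool(unit_include.fullmatch(unit)))
    # edpm_nova_compute.service True, virtqemud.service True, sshd.service False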
Dec  1 19:52:21 compute-0 nova_compute[189564]: 2025-12-01 19:52:21.184 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:23 compute-0 nova_compute[189564]: 2025-12-01 19:52:23.492 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:26 compute-0 nova_compute[189564]: 2025-12-01 19:52:26.187 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:26 compute-0 podman[249305]: 2025-12-01 19:52:26.33597218 +0000 UTC m=+0.109431856 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:52:26 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec  1 19:52:26 compute-0 systemd[1]: session-30.scope: Consumed 1.429s CPU time.
Dec  1 19:52:26 compute-0 systemd-logind[797]: Session 30 logged out. Waiting for processes to exit.
Dec  1 19:52:26 compute-0 systemd-logind[797]: Removed session 30.
Dec  1 19:52:28 compute-0 nova_compute[189564]: 2025-12-01 19:52:28.496 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:29 compute-0 podman[203750]: time="2025-12-01T19:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:52:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:52:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
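The two GET lines above are libpod REST calls answered by the podman system service over its unix socket; the same /run/podman/podman.sock path appears in the podman_exporter config_data later in this log. A minimal Python sketch of the containers/json query, assuming that socket path and the API version shown in the log (the Names/State fields are as returned by this libpod endpoint):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials a unix domain socket instead of TCP."""

    def __init__(self, path):
        super().__init__("localhost")  # host header only; we connect to the socket
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
for c in json.loads(resp.read()):
    print(c["Names"][0], c["State"])
```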
Dec  1 19:52:31 compute-0 nova_compute[189564]: 2025-12-01 19:52:31.189 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:31 compute-0 podman[249325]: 2025-12-01 19:52:31.331489297 +0000 UTC m=+0.093262325 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:52:31 compute-0 openstack_network_exporter[205914]: ERROR   19:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:52:31 compute-0 openstack_network_exporter[205914]: ERROR   19:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:52:31 compute-0 openstack_network_exporter[205914]: ERROR   19:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:52:31 compute-0 openstack_network_exporter[205914]: ERROR   19:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:52:31 compute-0 openstack_network_exporter[205914]: ERROR   19:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:52:33 compute-0 nova_compute[189564]: 2025-12-01 19:52:33.498 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:35 compute-0 podman[249350]: 2025-12-01 19:52:35.352508785 +0000 UTC m=+0.107430705 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 19:52:35 compute-0 podman[249348]: 2025-12-01 19:52:35.369100149 +0000 UTC m=+0.129751267 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, version=9.4, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, name=ubi9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc.)
Dec  1 19:52:35 compute-0 podman[249351]: 2025-12-01 19:52:35.383485586 +0000 UTC m=+0.126208227 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:52:35 compute-0 podman[249349]: 2025-12-01 19:52:35.388046997 +0000 UTC m=+0.141258273 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  1 19:52:35 compute-0 podman[249356]: 2025-12-01 19:52:35.40715342 +0000 UTC m=+0.149988074 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  1 19:52:36 compute-0 nova_compute[189564]: 2025-12-01 19:52:36.192 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:38 compute-0 nova_compute[189564]: 2025-12-01 19:52:38.501 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:39 compute-0 systemd-logind[797]: New session 31 of user zuul.
Dec  1 19:52:39 compute-0 systemd[1]: Started Session 31 of User zuul.
Dec  1 19:52:40 compute-0 python3[249624]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
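This Ansible task shells out to podman ps -a --format "{{.Names}} {{.Status}}" piped through grep to confirm the exporter container is up. A sketch of the same check without a shell pipeline; the container_status helper name is illustrative, and the output format follows the logged command:

```python
import subprocess

def container_status(name: str) -> str:
    """Return the '<name> <status>' line for one container, mirroring the
    logged `podman ps -a --format "{{.Names}} {{.Status}}" | grep <name>`."""
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if name in line:
            return line
    raise RuntimeError(f"container {name!r} not found")

print(container_status("node_exporter"))  # e.g. "node_exporter Up 10 minutes (healthy)"
```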
Dec  1 19:52:41 compute-0 nova_compute[189564]: 2025-12-01 19:52:41.196 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:43 compute-0 nova_compute[189564]: 2025-12-01 19:52:43.503 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:45 compute-0 podman[249665]: 2025-12-01 19:52:45.352911341 +0000 UTC m=+0.119404836 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 19:52:46 compute-0 nova_compute[189564]: 2025-12-01 19:52:46.201 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:47 compute-0 python3[249861]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 19:52:48 compute-0 nova_compute[189564]: 2025-12-01 19:52:48.507 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:50 compute-0 podman[249901]: 2025-12-01 19:52:50.335462436 +0000 UTC m=+0.099947211 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:52:51 compute-0 nova_compute[189564]: 2025-12-01 19:52:51.204 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:53 compute-0 nova_compute[189564]: 2025-12-01 19:52:53.510 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:56 compute-0 nova_compute[189564]: 2025-12-01 19:52:56.207 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:56 compute-0 podman[249950]: 2025-12-01 19:52:56.837116626 +0000 UTC m=+0.123390249 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  1 19:52:57 compute-0 python3[250123]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 19:52:58 compute-0 nova_compute[189564]: 2025-12-01 19:52:58.515 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:52:59 compute-0 podman[203750]: time="2025-12-01T19:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:52:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:52:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec  1 19:53:01 compute-0 nova_compute[189564]: 2025-12-01 19:53:01.209 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:53:01 compute-0 openstack_network_exporter[205914]: ERROR   19:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:53:01 compute-0 openstack_network_exporter[205914]: ERROR   19:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:53:01 compute-0 openstack_network_exporter[205914]: ERROR   19:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:53:01 compute-0 openstack_network_exporter[205914]: ERROR   19:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:53:01 compute-0 openstack_network_exporter[205914]: ERROR   19:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
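The repeated appctl.go errors mean the exporter found no .ctl control sockets for ovn-northd or an ovsdb-server; on a compute node that runs only ovn-controller and the local openvswitch, the northd errors are unsurprising, but it is worth confirming which sockets actually exist. A quick sketch over the host directories this exporter mounts as /run/openvswitch and /run/ovn (paths taken from the volumes list in its config_data above; adjust if your layout differs):

```python
import glob

# Host paths mounted into openstack_network_exporter as /run/openvswitch
# and /run/ovn, per the volumes in its config_data.
for d in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
    ctl = sorted(glob.glob(f"{d}/*.ctl"))
    print(d, "->", ", ".join(ctl) if ctl else "no control socket files found")
```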
Dec  1 19:53:02 compute-0 podman[250162]: 2025-12-01 19:53:02.331944473 +0000 UTC m=+0.100039515 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:53:03 compute-0 nova_compute[189564]: 2025-12-01 19:53:03.517 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:53:06 compute-0 nova_compute[189564]: 2025-12-01 19:53:06.211 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:53:06 compute-0 podman[250187]: 2025-12-01 19:53:06.337011773 +0000 UTC m=+0.096249547 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  1 19:53:06 compute-0 podman[250188]: 2025-12-01 19:53:06.358915102 +0000 UTC m=+0.128716094 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4)
Dec  1 19:53:06 compute-0 podman[250189]: 2025-12-01 19:53:06.36786199 +0000 UTC m=+0.125001599 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 19:53:06 compute-0 podman[250186]: 2025-12-01 19:53:06.371161163 +0000 UTC m=+0.130991245 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:53:06 compute-0 podman[250190]: 2025-12-01 19:53:06.409989257 +0000 UTC m=+0.158161197 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  1 19:53:08 compute-0 nova_compute[189564]: 2025-12-01 19:53:08.519 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:53:09 compute-0 nova_compute[189564]: 2025-12-01 19:53:09.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:09 compute-0 nova_compute[189564]: 2025-12-01 19:53:09.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:09 compute-0 nova_compute[189564]: 2025-12-01 19:53:09.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:09 compute-0 nova_compute[189564]: 2025-12-01 19:53:09.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:53:10 compute-0 nova_compute[189564]: 2025-12-01 19:53:10.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:10 compute-0 nova_compute[189564]: 2025-12-01 19:53:10.252 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
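The "Running periodic task ComputeManager._*" lines above come from oslo.service: the manager subclasses periodic_task.PeriodicTasks and run_periodic_tasks dispatches whichever decorated tasks are due. A minimal sketch of that mechanism under the public oslo.service API; the class and task names here are illustrative, not nova's:

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class DemoManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=10, run_immediately=True)
    def _poll_something(self, context):
        # Invoked on each run_periodic_tasks() pass once its interval is due.
        print("periodic task ran")

mgr = DemoManager()
mgr.run_periodic_tasks(context=None)
```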
Dec  1 19:53:11 compute-0 nova_compute[189564]: 2025-12-01 19:53:11.214 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:53:11 compute-0 nova_compute[189564]: 2025-12-01 19:53:11.313 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:53:11 compute-0 nova_compute[189564]: 2025-12-01 19:53:11.314 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:53:11 compute-0 nova_compute[189564]: 2025-12-01 19:53:11.314 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:53:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:53:12.208 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:53:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:53:12.209 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:53:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:53:12.210 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:53:12 compute-0 nova_compute[189564]: 2025-12-01 19:53:12.547 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
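The instance_info_cache entry logged above is plain JSON, so it can be inspected directly. A sketch that pulls the fixed and floating addresses out of a trimmed copy of that structure (only the fields the loop reads are kept; values are copied from the log line):

```python
import json

# Trimmed copy of the network_info entry from the log line above.
raw = """[{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107",
  "network": {"subnets": [{"cidr": "192.168.0.0/24",
    "ips": [{"address": "192.168.0.62",
      "floating_ips": [{"address": "192.168.122.240"}]}]}]}}]"""

for vif in json.loads(raw):
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floating = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "floating:", floating)
```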
Dec  1 19:53:12 compute-0 nova_compute[189564]: 2025-12-01 19:53:12.572 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:53:12 compute-0 nova_compute[189564]: 2025-12-01 19:53:12.572 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 19:53:12 compute-0 nova_compute[189564]: 2025-12-01 19:53:12.573 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:12 compute-0 python3[250455]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  1 19:53:13 compute-0 nova_compute[189564]: 2025-12-01 19:53:13.522 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:53:14 compute-0 nova_compute[189564]: 2025-12-01 19:53:14.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.245 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.267 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.268 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.301 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.301 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.302 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.303 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.383 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.471 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.473 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.563 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.566 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.638 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.641 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.712 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.726 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.790 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.792 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.857 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.860 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.923 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:53:15 compute-0 nova_compute[189564]: 2025-12-01 19:53:15.925 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.019 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
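Each disk probe above follows the same pattern: qemu-img info is run under oslo_concurrency.prlimit, which caps the child at 1 GiB of address space and 30 s of CPU so a pathological image cannot stall the resource audit. A sketch that reproduces the logged command verbatim and reads two standard keys from qemu-img's JSON output (the disk path is one of those shown above):

```python
import json
import subprocess

DISK = "/var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk"

# Exactly the wrapper nova logs: prlimit bounds the probe's address space
# (--as, bytes) and CPU time (--cpu, seconds) before exec'ing qemu-img.
cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824", "--cpu=30", "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", DISK, "--force-share", "--output=json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
info = json.loads(out)
print(info["format"], info["virtual-size"])
```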
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.218 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:53:16 compute-0 podman[250520]: 2025-12-01 19:53:16.39371465 +0000 UTC m=+0.142867503 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter)
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.496 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.497 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4827MB free_disk=72.33403015136719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.498 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.498 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.628 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.629 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.629 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.630 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.706 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.725 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
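Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio; a quick check against the values logged above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        # Effective capacity the scheduler can allocate against.
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2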
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.728 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 19:53:16 compute-0 nova_compute[189564]: 2025-12-01 19:53:16.729 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: released held 0.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
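The acquire/release pair above is oslo.concurrency's named-lock pattern: the resource tracker serializes every _update_available_resource() pass behind the "compute_resources" lock (held 0.230s here). A minimal sketch, assuming a recent oslo.concurrency with the fair keyword:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources', fair=True)
    def update_available_resource():
        # Claims, allocations and inventory are recomputed here while
        # concurrent callers queue on the same named lock.
        pass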
Dec  1 19:53:18 compute-0 nova_compute[189564]: 2025-12-01 19:53:18.525 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:21 compute-0 nova_compute[189564]: 2025-12-01 19:53:21.221 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:21 compute-0 podman[250543]: 2025-12-01 19:53:21.372337673 +0000 UTC m=+0.129599112 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
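The health_status=healthy events in this section come from podman's healthcheck timers running each container's configured test command (the 'healthcheck' key in config_data). The same check can be triggered by hand; a sketch:

    import subprocess

    # Runs the container's configured healthcheck once; exit code 0 = healthy.
    subprocess.run(['podman', 'healthcheck', 'run', 'node_exporter'], check=False)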
Dec  1 19:53:23 compute-0 nova_compute[189564]: 2025-12-01 19:53:23.528 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:26 compute-0 nova_compute[189564]: 2025-12-01 19:53:26.224 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:27 compute-0 podman[250567]: 2025-12-01 19:53:27.354609218 +0000 UTC m=+0.119598382 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:53:28 compute-0 nova_compute[189564]: 2025-12-01 19:53:28.534 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:29 compute-0 podman[203750]: time="2025-12-01T19:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:53:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:53:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
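These two access-log lines are the podman system service answering libpod REST calls over its unix socket, most likely from the podman_exporter (its config below sets CONTAINER_HOST to unix:///run/podman/podman.sock, and the user agent is a Go HTTP client). A self-contained sketch of the same query:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Minimal HTTP-over-unix-socket client for the libpod API.
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(conn.getresponse().read()[:200])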
Dec  1 19:53:31 compute-0 nova_compute[189564]: 2025-12-01 19:53:31.226 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:31 compute-0 openstack_network_exporter[205914]: ERROR   19:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:53:31 compute-0 openstack_network_exporter[205914]: ERROR   19:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:53:31 compute-0 openstack_network_exporter[205914]: ERROR   19:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:53:31 compute-0 openstack_network_exporter[205914]: ERROR   19:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:53:31 compute-0 openstack_network_exporter[205914]: ERROR   19:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
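The exporter errors above mean no ovsdb-server or ovn-northd control sockets (per-daemon files like /run/openvswitch/ovsdb-server.<pid>.ctl) are visible inside the exporter's mounts, and no userspace (netdev) datapath exists for the PMD queries; ovn-northd is not expected on a compute node in any case. A quick visibility check, assuming the standard rundir layout:

    import glob

    for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
        # Empty lists here reproduce the "no control socket files found" errors.
        print(pattern, glob.glob(pattern))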
Dec  1 19:53:33 compute-0 podman[250587]: 2025-12-01 19:53:33.361831337 +0000 UTC m=+0.118918741 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:53:33 compute-0 nova_compute[189564]: 2025-12-01 19:53:33.534 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:36 compute-0 nova_compute[189564]: 2025-12-01 19:53:36.229 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:37 compute-0 podman[250613]: 2025-12-01 19:53:37.342737149 +0000 UTC m=+0.088482296 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 19:53:37 compute-0 podman[250611]: 2025-12-01 19:53:37.373905636 +0000 UTC m=+0.125532626 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc.)
Dec  1 19:53:37 compute-0 podman[250612]: 2025-12-01 19:53:37.389100037 +0000 UTC m=+0.140375096 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 19:53:37 compute-0 podman[250614]: 2025-12-01 19:53:37.399790449 +0000 UTC m=+0.144917258 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 19:53:37 compute-0 podman[250615]: 2025-12-01 19:53:37.407204709 +0000 UTC m=+0.145870337 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 19:53:38 compute-0 nova_compute[189564]: 2025-12-01 19:53:38.537 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:41 compute-0 nova_compute[189564]: 2025-12-01 19:53:41.232 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:43 compute-0 nova_compute[189564]: 2025-12-01 19:53:43.540 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:46 compute-0 nova_compute[189564]: 2025-12-01 19:53:46.235 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:47 compute-0 podman[250707]: 2025-12-01 19:53:47.372383286 +0000 UTC m=+0.127104784 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6)
Dec  1 19:53:48 compute-0 nova_compute[189564]: 2025-12-01 19:53:48.542 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.818 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.818 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
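With more pollsters than worker threads (here a single thread), tasks queue behind each other and each polling cycle stretches out, exactly as the first message warns. The executor pattern logged here boils down to the following sketch (an illustration, not ceilometer's actual code):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        return 'polled ' + name

    pollsters = ['network.incoming.bytes.delta', 'network.outgoing.packets',
                 'network.outgoing.bytes.delta']
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)  # runs serially: one worker, three queued tasks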
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.820 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.825 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'name': 'test_0', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.829 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'name': 'vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx', 'flavor': {'id': '0891a7f6-7194-4f33-bc11-6f6ab8b16145', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '15bc897a-453b-4133-b6db-08ecdc2b6db0'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '35d2a9caf1634dca9fc12ec078239d84', 'user_id': '7c24e8f82e7842b785e565ac65c7f494', 'hostId': 'e632d98aa833376e2652bb395252bb54f4cc7fd6f020f0d51d7efcd6', 'status': 'active', 'metadata': {'metering.server_group': '47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.829 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.829 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.829 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T19:53:48.829803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.829 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.829 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.incoming.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.831 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.incoming.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.831 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.incoming.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.incoming.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.incoming.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.833 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.incoming.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.833 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.incoming.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.834 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6755b1a0>] with cache [{'inspect_vnics': {}}], pollster history [{'network.incoming.bytes.delta': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: test_0>, <NovaLikeServer: vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.835 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.839 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.839 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
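The .delta meters report the change between consecutive cumulative readings kept in the pollster history, so volume 0 above just means no traffic for that vNIC since the previous poll. A minimal sketch of that computation (the reset handling is an assumption, not a transcript of ceilometer's code):

    def delta_volume(previous, current):
        # A counter reset (current < previous) is treated as a fresh start.
        return current - previous if current >= previous else current

    print(delta_volume(1024, 1024))  # 0, matching the volumes logged above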
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.839 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.839 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.839 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T19:53:48.839909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.840 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.840 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.840 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.841 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.841 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.841 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.841 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.841 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T19:53:48.841539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.841 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.842 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.842 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.842 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.842 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.842 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.843 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.843 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T19:53:48.843183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.843 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.843 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.844 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.844 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.844 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.844 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.844 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.844 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.845 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T19:53:48.844796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.844 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.845 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.845 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.845 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.846 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.846 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.846 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.846 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.846 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T19:53:48.846281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.846 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.879 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.880 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.880 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.903 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.904 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.904 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.904 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
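disk.device.capacity emits one sample per block device, so each instance contributes three values above: two 1 GiB (1073741824-byte) virtual disks plus a small third device, likely a config drive. The numbers match what libvirt reports per device. A hedged sketch of gathering that triple with the libvirt Python bindings (this assumes libvirtd access and is an illustration of how such an inspector could work, not ceilometer's exact code):

    # Hedged sketch: per-device (capacity, allocation, physical) via
    # libvirt-python; requires libvirtd and read access to the domain.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    for target in ET.fromstring(dom.XMLDesc()).findall("./devices/disk/target"):
        dev = target.get("dev")                    # e.g. vda, vdb
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, "capacity:", capacity)          # feeds disk.device.capacity
    conn.close()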
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.905 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.905 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.905 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.905 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T19:53:48.905360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:48.905 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:48 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.015 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.016 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.016 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.116 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.116 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.117 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.117 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
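The disk.device.read.bytes values are cumulative counters (total bytes read since the instance started), not per-interval figures; downstream consumers derive rates by differencing consecutive polls. A worked example, where the second sample value and the 300-second polling interval are assumptions for illustration:

    # Deriving a read-byte rate from two cumulative samples; the second
    # value and the 300 s interval are assumed, not from the log.
    prev, curr = 23_308_800, 23_338_800   # bytes at t and t + 300 s
    rate = (curr - prev) / 300.0
    print(f"{rate:.1f} B/s")              # 100.0 B/s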
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.117 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.117 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.117 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.118 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.118 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.118 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.118 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.118 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.118 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.119 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
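The skip message above is the other possible outcome of the discovery step: when discovery yields no resources that are new for the current cycle (the bytes.rate pollster shares its resource set with meters already polled), the pollster is bypassed entirely. A simplified sketch of that branch, with assumed set-based bookkeeping rather than ceilometer's actual implementation:

    # Simplified sketch of the skip branch; the bookkeeping is assumed.
    def poll_or_skip(name, discovered, polled_this_cycle):
        new = [r for r in discovered if r not in polled_this_cycle]
        if not new:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return
        print(f"Polling pollster {name} ({len(new)} resources)")

    seen = {"e73931e9-f7fa-4666-b781-700b385532a9",
            "850ac274-3f22-41ce-b7d7-ac64d7adac70"}
    poll_or_skip("network.incoming.bytes.rate", seen, seen)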
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.119 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.119 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.119 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.119 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.119 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.119 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 474440550 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T19:53:49.118082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.119 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 65600453 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.120 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.latency volume: 49214734 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.120 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 578521054 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.120 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 98903610 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.120 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.latency volume: 76991265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.121 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T19:53:49.119726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.121 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.121 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.122 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.122 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.122 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.122 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.122 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.122 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.122 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.123 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.123 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.123 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
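Pairing the cumulative disk.device.read.latency totals with the disk.device.read.requests counts just polled gives an average per-request latency; for the first device of instance e73931e9-f7fa-4666-b781-700b385532a9 that works out to roughly 0.56 ms per read, assuming the latency counter is in nanoseconds. A quick check of the arithmetic:

    # Average read latency per request, using the two samples logged
    # above for the first device (latency assumed to be nanoseconds).
    total_latency_ns = 474_440_550
    read_requests = 840
    print(f"{total_latency_ns / read_requests / 1e6:.3f} ms/read")  # 0.565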
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.123 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.123 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.123 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.123 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.124 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.124 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.124 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.124 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.124 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.124 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.125 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.125 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T19:53:49.122120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T19:53:49.124043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.125 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.125 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.126 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.126 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.126 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.126 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.126 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T19:53:49.126140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.126 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.127 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.127 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.127 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.127 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.127 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.128 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.128 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.128 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.128 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T19:53:49.128243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.160 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.188 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.188 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
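power.state volume 1 for both instances corresponds to a running domain: both libvirt's virDomainState enum and nova's power_state use 1 for RUNNING. A hedged sketch reading the state directly from libvirt, which presumably backs these samples (requires libvirtd access):

    # Hedged sketch: reading the domain state behind power.state.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("e73931e9-f7fa-4666-b781-700b385532a9")
    state, _reason = dom.state()
    print("power.state volume:", state)   # 1 == libvirt.VIR_DOMAIN_RUNNING
    conn.close()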
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.189 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.189 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.189 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.189 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.189 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.189 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 1119912171 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.189 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 10391061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.189 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.190 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 2063543219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.190 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 12721696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.190 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.190 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.190 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.191 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.191 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T19:53:49.189343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.191 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.191 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.191 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.192 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.192 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.192 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.192 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.192 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.193 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.193 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.193 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.193 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.193 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.193 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.193 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.193 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.194 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.194 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.194 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.194 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.195 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
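Taken together, the three per-device disk meters polled in this cycle form a consistent triple per device: capacity (virtual size) >= allocation (bytes allocated in the image) >= usage (physical bytes on disk). For the first device of instance e73931e9-f7fa-4666-b781-700b385532a9 that is 1073741824 >= 21307392 >= 21233664, i.e. a sparse 1 GiB image about 2% materialized. A quick sanity check (the meter-to-field mapping is stated here as an assumption):

    # Sanity-checking the triple logged for the first device above.
    capacity, allocation, usage = 1_073_741_824, 21_307_392, 21_233_664
    assert usage <= allocation <= capacity
    print(f"{usage / capacity:.1%} of the virtual disk is materialized")  # 2.0%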
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.195 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.195 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.195 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.195 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.195 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.195 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
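Unlike the hypervisor-derived meters, disk.ephemeral.size finishes without any per-device volume lines: the value presumably comes from the instance's flavor metadata rather than from libvirt counters, and these flavors appear to define no ephemeral disk. A minimal sketch of a flavor-derived meter, with an assumed instance shape:

    # Minimal sketch of a flavor-derived meter; the instance dict shape
    # is an assumption for illustration.
    def ephemeral_size_gb(instance):
        return instance.get("flavor", {}).get("ephemeral", 0)

    print(ephemeral_size_gb({"flavor": {"ephemeral": 0}}))  # 0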
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.196 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.196 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.196 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.196 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.196 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.196 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.196 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.196 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T19:53:49.191731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T19:53:49.193592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T19:53:49.195497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T19:53:49.196339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T19:53:49.197376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.197 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.198 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.198 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.198 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.198 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.198 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.198 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.199 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.199 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T19:53:49.198898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.199 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.199 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.199 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.200 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.200 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.200 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.200 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T19:53:49.200221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.200 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.201 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.201 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.201 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.201 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.201 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.201 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.201 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.202 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.202 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.202 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.202 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.202 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.202 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.202 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.203 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T19:53:49.201674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T19:53:49.203070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.203 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.203 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/cpu volume: 53890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.203 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/cpu volume: 46790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.204 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
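The cpu volumes above (53890000000 and 46790000000) are cumulative guest CPU time in nanoseconds, so utilisation must be derived from the delta between two polls. A worked example; ns_prev and the 30 s interval are hypothetical, only ns_now comes from this log:

    # cpu is a cumulative counter in ns; utilisation = delta / (interval * vcpus).
    ns_prev, ns_now = 53_590_000_000, 53_890_000_000   # ns_prev is assumed
    interval_s = 30                                    # assumed polling interval
    vcpus = 1                                          # per the flavor seen later
    cpu_util_pct = (ns_now - ns_prev) / (interval_s * 1e9 * vcpus) * 100
    print(f"{cpu_util_pct:.1f}%")                      # -> 1.0%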
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.204 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.204 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.204 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.204 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.204 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.204 15 DEBUG ceilometer.compute.pollsters [-] e73931e9-f7fa-4666-b781-700b385532a9/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.204 15 DEBUG ceilometer.compute.pollsters [-] 850ac274-3f22-41ce-b7d7-ac64d7adac70/memory.usage volume: 48.9375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.205 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T19:53:49.204448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.205 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.206 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.207 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.207 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.207 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.207 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.207 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.208 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.208 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.208 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.208 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.208 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.209 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.209 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.209 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.209 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.209 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.210 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.210 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.210 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:53:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:53:49.210 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
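Every pollster flushed above was loaded as a stevedore extension (the stevedore.extension.Extension objects in the coordination-check lines). A sketch of listing them on a node with ceilometer installed, assuming ceilometer.poll.compute is the entry-point namespace the compute agent uses:

    # List the compute pollsters available to the agent. Requires ceilometer
    # to be installed; the namespace string is an assumption, not taken
    # from this log.
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute")
    for name in sorted(mgr.names()):
        print(name)   # cpu, memory.usage, network.incoming.packets.drop, ...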
Dec  1 19:53:51 compute-0 nova_compute[189564]: 2025-12-01 19:53:51.238 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:52 compute-0 podman[250732]: 2025-12-01 19:53:52.362535496 +0000 UTC m=+0.121095088 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 19:53:53 compute-0 nova_compute[189564]: 2025-12-01 19:53:53.545 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:56 compute-0 nova_compute[189564]: 2025-12-01 19:53:56.241 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:58 compute-0 podman[250755]: 2025-12-01 19:53:58.374931934 +0000 UTC m=+0.132092259 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  1 19:53:58 compute-0 nova_compute[189564]: 2025-12-01 19:53:58.547 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:53:59 compute-0 podman[203750]: time="2025-12-01T19:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:53:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:53:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 19:54:01 compute-0 nova_compute[189564]: 2025-12-01 19:54:01.243 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:54:01 compute-0 openstack_network_exporter[205914]: ERROR   19:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:54:01 compute-0 openstack_network_exporter[205914]: ERROR   19:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:54:01 compute-0 openstack_network_exporter[205914]: ERROR   19:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:54:01 compute-0 openstack_network_exporter[205914]: ERROR   19:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:54:01 compute-0 openstack_network_exporter[205914]: ERROR   19:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
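These exporter errors mean no ovs-appctl control sockets were found: ovn-northd does not run on a compute node, so its missing socket is expected, and the dpif-netdev errors only say there is no userspace (netdev) datapath here. A quick check for the sockets the exporter probes, assuming the conventional /var/run/openvswitch location:

    # Look for ovs-appctl control sockets; /var/run/openvswitch is the
    # conventional location (an assumption, not taken from this log).
    import glob
    print(glob.glob("/var/run/openvswitch/*.ctl"))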
Dec  1 19:54:03 compute-0 nova_compute[189564]: 2025-12-01 19:54:03.550 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:54:04 compute-0 podman[250779]: 2025-12-01 19:54:04.355366294 +0000 UTC m=+0.116106812 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:54:06 compute-0 nova_compute[189564]: 2025-12-01 19:54:06.246 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:08 compute-0 podman[250807]: 2025-12-01 19:54:08.350373383 +0000 UTC m=+0.100597392 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:54:08 compute-0 podman[250806]: 2025-12-01 19:54:08.353442109 +0000 UTC m=+0.110821630 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 19:54:08 compute-0 podman[250804]: 2025-12-01 19:54:08.355370428 +0000 UTC m=+0.121464109 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, name=ubi9)
Dec  1 19:54:08 compute-0 podman[250805]: 2025-12-01 19:54:08.374633277 +0000 UTC m=+0.135916559 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  1 19:54:08 compute-0 podman[250808]: 2025-12-01 19:54:08.38763406 +0000 UTC m=+0.136677822 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:54:08 compute-0 nova_compute[189564]: 2025-12-01 19:54:08.554 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:54:10 compute-0 nova_compute[189564]: 2025-12-01 19:54:10.711 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:54:10 compute-0 nova_compute[189564]: 2025-12-01 19:54:10.712 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:54:10 compute-0 nova_compute[189564]: 2025-12-01 19:54:10.712 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:54:10 compute-0 nova_compute[189564]: 2025-12-01 19:54:10.713 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:54:11 compute-0 nova_compute[189564]: 2025-12-01 19:54:11.249 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:54:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:54:12.210 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:54:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:54:12.211 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:54:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:54:12.212 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:54:12 compute-0 nova_compute[189564]: 2025-12-01 19:54:12.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:54:12 compute-0 nova_compute[189564]: 2025-12-01 19:54:12.251 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 19:54:12 compute-0 nova_compute[189564]: 2025-12-01 19:54:12.251 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 19:54:12 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec  1 19:54:12 compute-0 systemd[1]: session-31.scope: Consumed 4.786s CPU time.
Dec  1 19:54:12 compute-0 systemd-logind[797]: Session 31 logged out. Waiting for processes to exit.
Dec  1 19:54:12 compute-0 systemd-logind[797]: Removed session 31.
Dec  1 19:54:12 compute-0 nova_compute[189564]: 2025-12-01 19:54:12.728 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 19:54:12 compute-0 nova_compute[189564]: 2025-12-01 19:54:12.728 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 19:54:12 compute-0 nova_compute[189564]: 2025-12-01 19:54:12.729 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 19:54:12 compute-0 nova_compute[189564]: 2025-12-01 19:54:12.730 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:54:13 compute-0 nova_compute[189564]: 2025-12-01 19:54:13.558 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:54:14 compute-0 nova_compute[189564]: 2025-12-01 19:54:14.139 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:54:14 compute-0 nova_compute[189564]: 2025-12-01 19:54:14.156 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-e73931e9-f7fa-4666-b781-700b385532a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 19:54:14 compute-0 nova_compute[189564]: 2025-12-01 19:54:14.156 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
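The instance_info_cache update above embeds the full network_info document for the port. Pulling the fixed and floating addresses out of it is plain JSON traversal; the snippet below uses a trimmed copy of the logged document:

    # Extract fixed/floating addresses from the network_info structure above
    # (trimmed to the fields this traversal touches).
    import json

    net_info_json = '''[{"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.47",
        "floating_ips": [{"address": "192.168.122.206"}]}]}]}}]'''

    for vif in json.loads(net_info_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", fips)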
Dec  1 19:54:14 compute-0 nova_compute[189564]: 2025-12-01 19:54:14.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:54:14 compute-0 nova_compute[189564]: 2025-12-01 19:54:14.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.292 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.293 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.293 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.294 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.614 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.676 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.677 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.773 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.775 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.837 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.839 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.903 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.917 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.978 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:54:15 compute-0 nova_compute[189564]: 2025-12-01 19:54:15.980 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.061 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.063 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.148 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.149 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.240 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
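Each disk probe above shells out to qemu-img info wrapped in oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 CPU seconds (--cpu=30). A sketch of the same invocation from Python, reusing the command line as logged; it assumes oslo.concurrency and qemu-img are installed on the host:

    import json, subprocess

    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info",
           "/var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk",
           "--force-share", "--output=json"]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])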
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.251 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.603 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.604 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4835MB free_disk=72.33403015136719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.604 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.604 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.691 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.692 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.692 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.692 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.769 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.785 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
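The inventory record above is enough to reproduce what placement advertises for this node. Assuming placement's usual capacity formula, capacity per resource class is int((total - reserved) * allocation_ratio), and the two instances reported a few lines earlier (1 VCPU / 512 MB / 2 GB each) count against it; the "Final resource view" used_ram of 1536 MB additionally folds in the 512 MB reservation. A quick check:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    used = {'VCPU': 2, 'MEMORY_MB': 1024, 'DISK_GB': 4}  # sum of the two placement allocations

    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(f"{rc}: capacity={capacity} used={used[rc]} free={capacity - used[rc]}")
    # VCPU: capacity=32 used=2 free=30
    # MEMORY_MB: capacity=7168 used=1024 free=6144
    # DISK_GB: capacity=70 used=4 free=66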
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.787 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:54:16 compute-0 nova_compute[189564]: 2025-12-01 19:54:16.788 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
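The acquire/release bracket around _update_available_resource (waited 0.000s, held 0.184s) is oslo.concurrency's named-lock pattern; the same "compute_resources" lock reappears below for clean_compute_node_cache. A minimal sketch of both spellings of the real lockutils API (the function bodies are hypothetical stand-ins):

    from oslo_concurrency import lockutils

    def update_available_resource():
        pass  # hypothetical critical section

    # Context-manager form; with oslo logging at DEBUG this emits the same
    # "Acquiring" / "acquired" / "released" lines seen in this capture.
    with lockutils.lock('compute_resources'):
        update_available_resource()

    # Decorator form (nova wraps this in its own helpers):
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass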
Dec  1 19:54:18 compute-0 podman[250929]: 2025-12-01 19:54:18.368947338 +0000 UTC m=+0.122642037 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:54:18 compute-0 nova_compute[189564]: 2025-12-01 19:54:18.562 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:18 compute-0 nova_compute[189564]: 2025-12-01 19:54:18.784 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:54:18 compute-0 nova_compute[189564]: 2025-12-01 19:54:18.784 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:54:21 compute-0 nova_compute[189564]: 2025-12-01 19:54:21.254 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:23 compute-0 podman[250953]: 2025-12-01 19:54:23.316099313 +0000 UTC m=+0.084794161 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
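One detail worth pulling out of the node_exporter config above: the systemd collector is filtered with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, so only EDPM, Open vSwitch, libvirt and rsyslog units are scraped. The filter is a plain anchored regular expression; a quick check of what passes, assuming node_exporter matches against the full unit name (the unit names below are illustrative):

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("edpm_nova_compute.service", "openvswitch.service",
                 "virtqemud.service", "sshd.service"):
        print(unit, "->", "kept" if unit_include.fullmatch(unit) else "dropped")
    # sshd.service is dropped; the other three are kept.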
Dec  1 19:54:23 compute-0 nova_compute[189564]: 2025-12-01 19:54:23.563 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:26 compute-0 nova_compute[189564]: 2025-12-01 19:54:26.256 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:28 compute-0 nova_compute[189564]: 2025-12-01 19:54:28.566 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:29 compute-0 podman[250976]: 2025-12-01 19:54:29.349843974 +0000 UTC m=+0.120702557 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 19:54:29 compute-0 podman[203750]: time="2025-12-01T19:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:54:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:54:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
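These podman[203750] access-log lines are the libpod REST API being queried over the podman socket, consistent with podman_exporter's collection (its config above points CONTAINER_HOST at unix:///run/podman/podman.sock). The same containers/json query can be reproduced with stdlib Python over the Unix socket; a sketch, reusing the socket path from the exporter's config:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket; the libpod API has no TCP listener here."""

        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    for container in json.loads(conn.getresponse().read()):
        print(container['Names'][0], container['State'])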
Dec  1 19:54:31 compute-0 nova_compute[189564]: 2025-12-01 19:54:31.260 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:31 compute-0 openstack_network_exporter[205914]: ERROR   19:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:54:31 compute-0 openstack_network_exporter[205914]: ERROR   19:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:54:31 compute-0 openstack_network_exporter[205914]: ERROR   19:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:54:31 compute-0 openstack_network_exporter[205914]: ERROR   19:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:54:31 compute-0 openstack_network_exporter[205914]: ERROR   19:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
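The exporter error burst above decodes cleanly from the messages themselves: openstack_network_exporter probes daemons through their ovs-appctl-style control sockets, and on this node it finds neither an ovsdb-server socket nor an ovn-northd one (northd runs on the control plane, not on compute nodes), while the dpif-netdev calls fail because this host has no userspace (netdev) datapath. A small check along the lines of what the collector attempts; the glob patterns assume the usual <daemon>.<pid>.ctl naming under the run directories mounted into the exporter container:

    import glob

    patterns = {
        'ovsdb-server': '/run/openvswitch/ovsdb-server.*.ctl',
        'ovn-northd':   '/run/ovn/ovn-northd.*.ctl',
    }
    for daemon, pattern in patterns.items():
        found = glob.glob(pattern)
        print(daemon, '->', found or 'no control socket files found')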
Dec  1 19:54:33 compute-0 nova_compute[189564]: 2025-12-01 19:54:33.569 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:35 compute-0 podman[250994]: 2025-12-01 19:54:35.355151428 +0000 UTC m=+0.120599453 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 19:54:36 compute-0 nova_compute[189564]: 2025-12-01 19:54:36.264 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:38 compute-0 nova_compute[189564]: 2025-12-01 19:54:38.573 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:39 compute-0 podman[251021]: 2025-12-01 19:54:39.379409756 +0000 UTC m=+0.120339864 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2)
Dec  1 19:54:39 compute-0 podman[251019]: 2025-12-01 19:54:39.384470593 +0000 UTC m=+0.140292754 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, release-0.7.12=, version=9.4, name=ubi9, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64)
Dec  1 19:54:39 compute-0 podman[251022]: 2025-12-01 19:54:39.392921515 +0000 UTC m=+0.126186996 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  1 19:54:39 compute-0 podman[251020]: 2025-12-01 19:54:39.395160425 +0000 UTC m=+0.141677217 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 19:54:39 compute-0 podman[251028]: 2025-12-01 19:54:39.449555593 +0000 UTC m=+0.171769511 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 19:54:41 compute-0 nova_compute[189564]: 2025-12-01 19:54:41.268 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:43 compute-0 nova_compute[189564]: 2025-12-01 19:54:43.575 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:46 compute-0 nova_compute[189564]: 2025-12-01 19:54:46.271 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:48 compute-0 nova_compute[189564]: 2025-12-01 19:54:48.577 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:49 compute-0 podman[251117]: 2025-12-01 19:54:49.347216898 +0000 UTC m=+0.118909790 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 19:54:51 compute-0 nova_compute[189564]: 2025-12-01 19:54:51.274 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:53 compute-0 nova_compute[189564]: 2025-12-01 19:54:53.579 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:54 compute-0 podman[251137]: 2025-12-01 19:54:54.310555277 +0000 UTC m=+0.086409902 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 19:54:56 compute-0 nova_compute[189564]: 2025-12-01 19:54:56.276 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:58 compute-0 nova_compute[189564]: 2025-12-01 19:54:58.584 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:54:59 compute-0 podman[203750]: time="2025-12-01T19:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:54:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:54:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec  1 19:55:00 compute-0 podman[251164]: 2025-12-01 19:55:00.369379144 +0000 UTC m=+0.123473352 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 19:55:01 compute-0 nova_compute[189564]: 2025-12-01 19:55:01.280 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:01 compute-0 openstack_network_exporter[205914]: ERROR   19:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:55:01 compute-0 openstack_network_exporter[205914]: ERROR   19:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:55:01 compute-0 openstack_network_exporter[205914]: ERROR   19:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:55:01 compute-0 openstack_network_exporter[205914]: ERROR   19:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:55:01 compute-0 openstack_network_exporter[205914]: ERROR   19:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:55:03 compute-0 nova_compute[189564]: 2025-12-01 19:55:03.586 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:06 compute-0 nova_compute[189564]: 2025-12-01 19:55:06.283 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:06 compute-0 podman[251186]: 2025-12-01 19:55:06.315992667 +0000 UTC m=+0.083137110 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 19:55:08 compute-0 nova_compute[189564]: 2025-12-01 19:55:08.588 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:09 compute-0 nova_compute[189564]: 2025-12-01 19:55:09.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:09 compute-0 nova_compute[189564]: 2025-12-01 19:55:09.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:55:10 compute-0 podman[251209]: 2025-12-01 19:55:10.366888292 +0000 UTC m=+0.113167073 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 19:55:10 compute-0 podman[251208]: 2025-12-01 19:55:10.391501495 +0000 UTC m=+0.145343741 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container)
Dec  1 19:55:10 compute-0 podman[251218]: 2025-12-01 19:55:10.413117107 +0000 UTC m=+0.131060438 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  1 19:55:10 compute-0 podman[251211]: 2025-12-01 19:55:10.417874654 +0000 UTC m=+0.150123759 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 19:55:10 compute-0 podman[251210]: 2025-12-01 19:55:10.420754454 +0000 UTC m=+0.162056879 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  1 19:55:11 compute-0 nova_compute[189564]: 2025-12-01 19:55:11.286 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:12.211 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:12.211 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:12.212 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:12 compute-0 nova_compute[189564]: 2025-12-01 19:55:12.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:12 compute-0 nova_compute[189564]: 2025-12-01 19:55:12.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:13 compute-0 nova_compute[189564]: 2025-12-01 19:55:13.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:13 compute-0 nova_compute[189564]: 2025-12-01 19:55:13.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:55:13 compute-0 nova_compute[189564]: 2025-12-01 19:55:13.591 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:13 compute-0 nova_compute[189564]: 2025-12-01 19:55:13.807 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:55:13 compute-0 nova_compute[189564]: 2025-12-01 19:55:13.808 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:55:13 compute-0 nova_compute[189564]: 2025-12-01 19:55:13.809 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 19:55:15 compute-0 nova_compute[189564]: 2025-12-01 19:55:15.267 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:55:15 compute-0 nova_compute[189564]: 2025-12-01 19:55:15.290 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:55:15 compute-0 nova_compute[189564]: 2025-12-01 19:55:15.290 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
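The cache refresh above embeds the instance's complete network_info as JSON, which makes it a convenient place to recover addressing from a capture: one OVS port on br-int, fixed IP 192.168.0.62 with floating IP 192.168.122.240, MTU 1442 on a tunneled network. A short extraction sketch over that structure (the literal is trimmed to the fields actually used):

    network_info = [{
        "id": "076102cd-d411-4d3d-a31e-4851d4a8d107",
        "address": "fa:16:3e:ce:df:71",
        "network": {"label": "private", "subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.62", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.240",
                                       "type": "floating"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", floats)
    # fa:16:3e:ce:df:71 192.168.0.62 -> ['192.168.122.240']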
Dec  1 19:55:15 compute-0 nova_compute[189564]: 2025-12-01 19:55:15.292 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:15 compute-0 nova_compute[189564]: 2025-12-01 19:55:15.292 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.278 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.279 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.279 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.279 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.289 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.375 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.441 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.443 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.530 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.532 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.613 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.615 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.702 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.711 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.797 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.799 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.877 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.878 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.936 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:55:16 compute-0 nova_compute[189564]: 2025-12-01 19:55:16.937 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.010 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.490 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.492 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4847MB free_disk=72.33402633666992GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.492 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.493 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.641 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance e73931e9-f7fa-4666-b781-700b385532a9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.642 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 850ac274-3f22-41ce-b7d7-ac64d7adac70 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.642 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.642 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.692 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.735 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.735 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.749 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.775 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.835 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.849 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.851 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:55:17 compute-0 nova_compute[189564]: 2025-12-01 19:55:17.851 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:18 compute-0 nova_compute[189564]: 2025-12-01 19:55:18.593 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:18 compute-0 nova_compute[189564]: 2025-12-01 19:55:18.846 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.264 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.408 189568 DEBUG nova.compute.manager [req-07d61063-83cf-4bf8-8253-a81d1e2c7a55 req-da49523e-1125-473e-8b4f-295258505f0a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-changed-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.409 189568 DEBUG nova.compute.manager [req-07d61063-83cf-4bf8-8253-a81d1e2c7a55 req-da49523e-1125-473e-8b4f-295258505f0a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Refreshing instance network info cache due to event network-changed-076102cd-d411-4d3d-a31e-4851d4a8d107. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.410 189568 DEBUG oslo_concurrency.lockutils [req-07d61063-83cf-4bf8-8253-a81d1e2c7a55 req-da49523e-1125-473e-8b4f-295258505f0a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.411 189568 DEBUG oslo_concurrency.lockutils [req-07d61063-83cf-4bf8-8253-a81d1e2c7a55 req-da49523e-1125-473e-8b4f-295258505f0a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.412 189568 DEBUG nova.network.neutron [req-07d61063-83cf-4bf8-8253-a81d1e2c7a55 req-da49523e-1125-473e-8b4f-295258505f0a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Refreshing network info cache for port 076102cd-d411-4d3d-a31e-4851d4a8d107 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.484 189568 DEBUG oslo_concurrency.lockutils [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.485 189568 DEBUG oslo_concurrency.lockutils [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.486 189568 DEBUG oslo_concurrency.lockutils [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.487 189568 DEBUG oslo_concurrency.lockutils [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.488 189568 DEBUG oslo_concurrency.lockutils [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.491 189568 INFO nova.compute.manager [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Terminating instance#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.493 189568 DEBUG nova.compute.manager [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 19:55:19 compute-0 kernel: tap076102cd-d4 (unregistering): left promiscuous mode
Dec  1 19:55:19 compute-0 NetworkManager[56474]: <info>  [1764618919.5491] device (tap076102cd-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.560 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00049|binding|INFO|Releasing lport 076102cd-d411-4d3d-a31e-4851d4a8d107 from this chassis (sb_readonly=0)
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00050|binding|INFO|Setting lport 076102cd-d411-4d3d-a31e-4851d4a8d107 down in Southbound
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00051|binding|INFO|Removing iface tap076102cd-d4 ovn-installed in OVS
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.572 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:df:71 192.168.0.62'], port_security=['fa:16:3e:ce:df:71 192.168.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vz2nmrxztcck-a6xkcgll2h6t-dmjd3wlevael-port-elrdg4anttdl', 'neutron:cidrs': '192.168.0.62/24', 'neutron:device_id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a4b8529-6171-4880-a97c-66966115a61b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vz2nmrxztcck-a6xkcgll2h6t-dmjd3wlevael-port-elrdg4anttdl', 'neutron:project_id': '35d2a9caf1634dca9fc12ec078239d84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e61a5e79-a7e0-4e4e-bcbc-f9aad845c2b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58f8227a-30b3-42df-b03a-90442a651a6d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=076102cd-d411-4d3d-a31e-4851d4a8d107) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.573 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 076102cd-d411-4d3d-a31e-4851d4a8d107 in datapath 2a4b8529-6171-4880-a97c-66966115a61b unbound from our chassis#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.574 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a4b8529-6171-4880-a97c-66966115a61b#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.567 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.588 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.596 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[16bf4991-3b7c-48fe-826f-add70e7f2e1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  1 19:55:19 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 3min 27.803s CPU time.
Dec  1 19:55:19 compute-0 systemd-machined[155891]: Machine qemu-3-instance-00000003 terminated.
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.630 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb9bd12-c41f-448e-bb91-a0af5158fe51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.633 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4a7142-8385-494a-9562-caf943155cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.659 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.659 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.663 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[804714b4-9feb-4510-ba57-39584f11facd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 podman[251330]: 2025-12-01 19:55:19.674914139 +0000 UTC m=+0.089583230 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.682 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d2345f7a-4479-4bd5-9a44-70fefa59d701]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a4b8529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 11, 'rx_bytes': 574, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 11, 'rx_bytes': 574, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388613, 'reachable_time': 31930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251359, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.699 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ba34dd-426b-411e-bcab-3bc33dd67e04]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388627, 'tstamp': 388627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251362, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388631, 'tstamp': 388631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251362, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.700 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a4b8529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.702 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.710 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.711 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a4b8529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.712 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.713 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a4b8529-60, col_values=(('external_ids', {'iface-id': 'f95692ff-1cac-46fe-9e62-21af9fa55eb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.713 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.714 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 19:55:19 compute-0 kernel: tap076102cd-d4: entered promiscuous mode
Dec  1 19:55:19 compute-0 kernel: tap076102cd-d4 (unregistering): left promiscuous mode
Dec  1 19:55:19 compute-0 NetworkManager[56474]: <info>  [1764618919.7298] manager: (tap076102cd-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00052|binding|INFO|Claiming lport 076102cd-d411-4d3d-a31e-4851d4a8d107 for this chassis.
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00053|binding|INFO|076102cd-d411-4d3d-a31e-4851d4a8d107: Claiming fa:16:3e:ce:df:71 192.168.0.62
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.737 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.744 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:df:71 192.168.0.62'], port_security=['fa:16:3e:ce:df:71 192.168.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vz2nmrxztcck-a6xkcgll2h6t-dmjd3wlevael-port-elrdg4anttdl', 'neutron:cidrs': '192.168.0.62/24', 'neutron:device_id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a4b8529-6171-4880-a97c-66966115a61b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vz2nmrxztcck-a6xkcgll2h6t-dmjd3wlevael-port-elrdg4anttdl', 'neutron:project_id': '35d2a9caf1634dca9fc12ec078239d84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e61a5e79-a7e0-4e4e-bcbc-f9aad845c2b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58f8227a-30b3-42df-b03a-90442a651a6d, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=076102cd-d411-4d3d-a31e-4851d4a8d107) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.745 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 076102cd-d411-4d3d-a31e-4851d4a8d107 in datapath 2a4b8529-6171-4880-a97c-66966115a61b bound to our chassis#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.746 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a4b8529-6171-4880-a97c-66966115a61b#033[00m
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00054|binding|INFO|Setting lport 076102cd-d411-4d3d-a31e-4851d4a8d107 ovn-installed in OVS
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00055|binding|INFO|Setting lport 076102cd-d411-4d3d-a31e-4851d4a8d107 up in Southbound
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00056|binding|INFO|Releasing lport 076102cd-d411-4d3d-a31e-4851d4a8d107 from this chassis (sb_readonly=1)
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00057|if_status|INFO|Not setting lport 076102cd-d411-4d3d-a31e-4851d4a8d107 down as sb is readonly
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00058|binding|INFO|Removing iface tap076102cd-d4 ovn-installed in OVS
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.760 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00059|binding|INFO|Releasing lport 076102cd-d411-4d3d-a31e-4851d4a8d107 from this chassis (sb_readonly=1)
Dec  1 19:55:19 compute-0 ovn_controller[97948]: 2025-12-01T19:55:19Z|00060|binding|INFO|Setting lport 076102cd-d411-4d3d-a31e-4851d4a8d107 down in Southbound
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.767 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0b7b58-93eb-44df-a989-e38f61f1a062]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.771 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:df:71 192.168.0.62'], port_security=['fa:16:3e:ce:df:71 192.168.0.62'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-vz2nmrxztcck-a6xkcgll2h6t-dmjd3wlevael-port-elrdg4anttdl', 'neutron:cidrs': '192.168.0.62/24', 'neutron:device_id': '850ac274-3f22-41ce-b7d7-ac64d7adac70', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a4b8529-6171-4880-a97c-66966115a61b', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-vz2nmrxztcck-a6xkcgll2h6t-dmjd3wlevael-port-elrdg4anttdl', 'neutron:project_id': '35d2a9caf1634dca9fc12ec078239d84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e61a5e79-a7e0-4e4e-bcbc-f9aad845c2b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58f8227a-30b3-42df-b03a-90442a651a6d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=076102cd-d411-4d3d-a31e-4851d4a8d107) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.773 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.791 189568 INFO nova.virt.libvirt.driver [-] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Instance destroyed successfully.#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.791 189568 DEBUG nova.objects.instance [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'resources' on Instance uuid 850ac274-3f22-41ce-b7d7-ac64d7adac70 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.801 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[bef5d940-182b-4197-8d43-3826c6bcba99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.804 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[220ab0ff-ad1c-4d83-946b-3a706dcc7c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.809 189568 DEBUG nova.virt.libvirt.vif [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T19:36:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-rxztcck-a6xkcgll2h6t-dmjd3wlevael-vnf-74vtqyxw74yx',id=3,image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T19:36:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='47cf63e2-5b7c-4ff3-8543-aef6d5b1a5c9'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35d2a9caf1634dca9fc12ec078239d84',ramdisk_id='',reservation_id='r-fbknfj75',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T19:36:38Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3MjIwNTAwODE5MDY0NzAwODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTcyMjA1MDA4MTkwNjQ3MDA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3MjIwNTAwODE5MDY0NzAwODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  1 19:55:19 compute-0 nova_compute[189564]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTcyMjA1MDA4MTkwNjQ3MDA4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3MjIwNTAwODE5MDY0NzAwODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzIyMDUwMDgxOTA2NDcwMDg4PT0tLQo=',user_id='7c24e8f82e7842b785e565ac65c7f494',uuid=850ac274-3f22-41ce-b7d7-ac64d7adac70,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.809 189568 DEBUG nova.network.os_vif_util [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converting VIF {"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.811 189568 DEBUG nova.network.os_vif_util [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ce:df:71,bridge_name='br-int',has_traffic_filtering=True,id=076102cd-d411-4d3d-a31e-4851d4a8d107,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap076102cd-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.812 189568 DEBUG os_vif [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ce:df:71,bridge_name='br-int',has_traffic_filtering=True,id=076102cd-d411-4d3d-a31e-4851d4a8d107,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap076102cd-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
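
The two conversion lines and the unplug line above are the hand-off from nova's internal network-info model to the os-vif library, which owns the per-VIF-type plug/unplug logic. Roughly, the call pattern is the following sketch; the field values are copied from the log, the instance name is illustrative, and a real caller would let nova build the objects:

    # Sketch of the os_vif entry points used above; assumes the os-vif
    # package and its 'ovs' plugin are installed.
    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the VIF plugins (ovs, linux_bridge, ...)

    vif = VIFOpenVSwitch(
        id="076102cd-d411-4d3d-a31e-4851d4a8d107",
        address="fa:16:3e:ce:df:71",
        vif_name="tap076102cd-d4",
        bridge_name="br-int",
        network=Network(id="2a4b8529-6171-4880-a97c-66966115a61b"))
    instance = InstanceInfo(uuid="850ac274-3f22-41ce-b7d7-ac64d7adac70",
                            name="instance-00000001")  # name is illustrative
    os_vif.unplug(vif, instance)  # -> "Successfully unplugged vif ..." below
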
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.814 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.816 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap076102cd-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
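
DelPortCommand (and the AddPortCommand/DbSetCommand variants further down) are ovsdbapp's OVS-IDL command objects; both nova_compute and the OVN metadata agent drive the local ovsdb-server through this library. A rough equivalent of the transaction being committed here, assuming the default database socket path:

    # Sketch: issuing the same idempotent del_port through ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # if_exists=True makes this a no-op when the port is already gone;
        # the analogous may_exist/if_exists flags are why the agent's
        # re-runs below log "Transaction caused no change".
        txn.add(api.del_port("tap076102cd-d4", bridge="br-int",
                             if_exists=True))
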
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.819 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.821 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.824 189568 INFO os_vif [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ce:df:71,bridge_name='br-int',has_traffic_filtering=True,id=076102cd-d411-4d3d-a31e-4851d4a8d107,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap076102cd-d4')#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.825 189568 INFO nova.virt.libvirt.driver [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Deleting instance files /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70_del#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.826 189568 INFO nova.virt.libvirt.driver [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Deletion of /var/lib/nova/instances/850ac274-3f22-41ce-b7d7-ac64d7adac70_del complete#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.831 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[ba933c76-ba1d-437f-9181-1a2384ea762e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.835 189568 DEBUG nova.compute.manager [req-f47fbee3-04aa-441f-9d34-0a9c80142479 req-0121f26a-22c7-4bfb-ac56-5d67e04ba58b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-vif-unplugged-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.836 189568 DEBUG oslo_concurrency.lockutils [req-f47fbee3-04aa-441f-9d34-0a9c80142479 req-0121f26a-22c7-4bfb-ac56-5d67e04ba58b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.836 189568 DEBUG oslo_concurrency.lockutils [req-f47fbee3-04aa-441f-9d34-0a9c80142479 req-0121f26a-22c7-4bfb-ac56-5d67e04ba58b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.836 189568 DEBUG oslo_concurrency.lockutils [req-f47fbee3-04aa-441f-9d34-0a9c80142479 req-0121f26a-22c7-4bfb-ac56-5d67e04ba58b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.837 189568 DEBUG nova.compute.manager [req-f47fbee3-04aa-441f-9d34-0a9c80142479 req-0121f26a-22c7-4bfb-ac56-5d67e04ba58b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] No waiting events found dispatching network-vif-unplugged-076102cd-d411-4d3d-a31e-4851d4a8d107 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.837 189568 DEBUG nova.compute.manager [req-f47fbee3-04aa-441f-9d34-0a9c80142479 req-0121f26a-22c7-4bfb-ac56-5d67e04ba58b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-vif-unplugged-076102cd-d411-4d3d-a31e-4851d4a8d107 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
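
The acquire/pop/release triplets around every network-vif-* notification are nova's external-event handshake: an operation that must wait for Neutron (say, for a port to come up) registers an event under the per-instance events lock, and the Neutron-triggered callback pops and signals it. When nothing registered a waiter, as during this delete, the event is logged and dropped. A stripped-down illustration of the pattern (simplified; not nova's actual class):

    # Simplified register/pop handshake in the style of nova's
    # InstanceEvents (illustrative only).
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            with self._lock:
                ev = self._waiters[(uuid, name)] = threading.Event()
            return ev  # caller blocks on ev.wait(timeout)

        def pop(self, uuid, name):
            with self._lock:
                waiter = self._waiters.pop((uuid, name), None)
            if waiter is None:
                print("No waiting events found dispatching %s" % name)
            else:
                waiter.set()
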
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.851 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[0c01e38d-21cf-4a1e-81b3-f61e2e3fb1fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a4b8529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 13, 'rx_bytes': 574, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 13, 'rx_bytes': 574, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388613, 'reachable_time': 31930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251389, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.868 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[a36fa370-f18a-49ac-a16b-36e3eaefc0fd]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388627, 'tstamp': 388627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251390, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388631, 'tstamp': 388631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251390, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
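
The two privsep replies above are raw pyroute2 netlink dumps (RTM_NEWLINK/RTM_NEWADDR) taken inside the ovnmeta-2a4b8529-... namespace: the metadata tap holds both the network-local 192.168.0.2/24 and the well-known 169.254.169.254/32. The same dump can be reproduced directly; a sketch, requiring root and pyroute2, with the namespace name copied from the log:

    # Sketch: dump addresses inside the OVN metadata namespace queried above.
    from pyroute2 import NetNS

    with NetNS("ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b") as ns:
        for addr in ns.get_addr():
            print(addr.get_attr("IFA_LABEL"),
                  "%s/%s" % (addr.get_attr("IFA_ADDRESS"),
                             addr["prefixlen"]))
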
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.870 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a4b8529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.872 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.874 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.875 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a4b8529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.875 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.876 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a4b8529-60, col_values=(('external_ids', {'iface-id': 'f95692ff-1cac-46fe-9e62-21af9fa55eb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.876 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.878 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 076102cd-d411-4d3d-a31e-4851d4a8d107 in datapath 2a4b8529-6171-4880-a97c-66966115a61b unbound from our chassis#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.880 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2a4b8529-6171-4880-a97c-66966115a61b#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.892 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[83fbe9fd-7df2-49ba-b399-2ab41c82bc45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
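
"Provisioning metadata for network" means the agent re-asserts the datapath's metadata plumbing: the ovnmeta-... namespace, the tap re-attached to br-int with the right iface-id (the no-op transactions above), and the proxy answering on 169.254.169.254. From a guest on that network, a quick liveness probe looks like the sketch below; /openstack lists the metadata API versions:

    # Sketch: probe the metadata service from a guest (or from inside the
    # ovnmeta namespace); illustrative only.
    import urllib.request

    with urllib.request.urlopen("http://169.254.169.254/openstack",
                                timeout=5) as resp:
        print(resp.read().decode())
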
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.903 189568 INFO nova.compute.manager [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.904 189568 DEBUG oslo.service.loopingcall [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.904 189568 DEBUG nova.compute.manager [-] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 19:55:19 compute-0 nova_compute[189564]: 2025-12-01 19:55:19.904 189568 DEBUG nova.network.neutron [-] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.927 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[47337934-9ada-46f8-94ee-48e45cd9d1d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.931 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[55453e25-6397-430d-8185-92915414272f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.968 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7eeb1f-aae6-4637-9cef-6f0f1e8857ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:19.987 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1f6e08-70e1-49e3-84b1-5428c929be85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2a4b8529-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:81:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 15, 'rx_bytes': 574, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 15, 'rx_bytes': 574, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388613, 'reachable_time': 31930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251396, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:20 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:20.010 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f8ca08-a9e5-4181-a972-92c9c48860b8]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388627, 'tstamp': 388627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251397, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2a4b8529-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 388631, 'tstamp': 388631}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251397, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 19:55:20 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:20.011 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a4b8529-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:20 compute-0 nova_compute[189564]: 2025-12-01 19:55:20.013 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:20 compute-0 nova_compute[189564]: 2025-12-01 19:55:20.015 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:20 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:20.016 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a4b8529-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:20 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:20.016 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:55:20 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:20.016 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2a4b8529-60, col_values=(('external_ids', {'iface-id': 'f95692ff-1cac-46fe-9e62-21af9fa55eb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:20 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:20.017 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 19:55:20 compute-0 rsyslogd[236874]: message too long (8192) with configured size 8096, begin of message is: 2025-12-01 19:55:19.809 189568 DEBUG nova.virt.libvirt.vif [None req-e4a609d8-0b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
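
This rsyslogd complaint explains the mangled record at 19:55:19.809: the nova.virt.libvirt.vif DEBUG line (instance dump plus base64 user_data) exceeded the 8096-byte message cap, so it arrived split and truncated earlier in this file. If whole oversized records are wanted, the cap is raised in the rsyslog configuration; a hedged example, to be placed before any input modules are loaded (exact placement depends on the local config layout):

    # /etc/rsyslog.conf (excerpt) -- raise the per-message size cap
    global(maxMessageSize="64k")
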
Dec  1 19:55:20 compute-0 nova_compute[189564]: 2025-12-01 19:55:20.872 189568 DEBUG nova.network.neutron [req-07d61063-83cf-4bf8-8253-a81d1e2c7a55 req-da49523e-1125-473e-8b4f-295258505f0a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updated VIF entry in instance network info cache for port 076102cd-d411-4d3d-a31e-4851d4a8d107. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 19:55:20 compute-0 nova_compute[189564]: 2025-12-01 19:55:20.873 189568 DEBUG nova.network.neutron [req-07d61063-83cf-4bf8-8253-a81d1e2c7a55 req-da49523e-1125-473e-8b4f-295258505f0a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [{"id": "076102cd-d411-4d3d-a31e-4851d4a8d107", "address": "fa:16:3e:ce:df:71", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.62", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap076102cd-d4", "ovs_interfaceid": "076102cd-d411-4d3d-a31e-4851d4a8d107", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:55:20 compute-0 nova_compute[189564]: 2025-12-01 19:55:20.903 189568 DEBUG oslo_concurrency.lockutils [req-07d61063-83cf-4bf8-8253-a81d1e2c7a55 req-da49523e-1125-473e-8b4f-295258505f0a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-850ac274-3f22-41ce-b7d7-ac64d7adac70" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.233 189568 DEBUG nova.network.neutron [-] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.256 189568 INFO nova.compute.manager [-] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Took 1.35 seconds to deallocate network for instance.#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.291 189568 DEBUG oslo_concurrency.lockutils [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.292 189568 DEBUG oslo_concurrency.lockutils [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.386 189568 DEBUG nova.compute.provider_tree [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.400 189568 DEBUG nova.scheduler.client.report [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
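
This inventory record is what the resource tracker keeps in placement for the node, and it fixes schedulable capacity as (total - reserved) * allocation_ratio per resource class. Worked out for the values above:

    # Effective capacity implied by the preceding inventory record.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2
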
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.419 189568 DEBUG oslo_concurrency.lockutils [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.456 189568 INFO nova.scheduler.client.report [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Deleted allocations for instance 850ac274-3f22-41ce-b7d7-ac64d7adac70#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.541 189568 DEBUG oslo_concurrency.lockutils [None req-e4a609d8-0b73-472d-baec-d29542a99ced 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.917 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.919 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.919 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.919 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.919 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] No waiting events found dispatching network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.919 189568 WARNING nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received unexpected event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.919 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.920 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.920 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.920 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.920 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] No waiting events found dispatching network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.920 189568 WARNING nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received unexpected event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.920 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.921 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.921 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.921 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.922 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] No waiting events found dispatching network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.922 189568 WARNING nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received unexpected event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.922 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-vif-unplugged-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.922 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.923 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.923 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.923 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] No waiting events found dispatching network-vif-unplugged-076102cd-d411-4d3d-a31e-4851d4a8d107 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.924 189568 WARNING nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received unexpected event network-vif-unplugged-076102cd-d411-4d3d-a31e-4851d4a8d107 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.929 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.930 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.930 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.930 189568 DEBUG oslo_concurrency.lockutils [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "850ac274-3f22-41ce-b7d7-ac64d7adac70-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.930 189568 DEBUG nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] No waiting events found dispatching network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 19:55:21 compute-0 nova_compute[189564]: 2025-12-01 19:55:21.931 189568 WARNING nova.compute.manager [req-8466dda5-eefa-4826-a55f-93f79ceeacef req-17e61bb1-7fd6-4b19-beb7-be5e8fe6b8c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Received unexpected event network-vif-plugged-076102cd-d411-4d3d-a31e-4851d4a8d107 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 19:55:22 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:22.717 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 19:55:23 compute-0 nova_compute[189564]: 2025-12-01 19:55:23.596 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:24 compute-0 nova_compute[189564]: 2025-12-01 19:55:24.820 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:55:25 compute-0 podman[251398]: 2025-12-01 19:55:25.294613543 +0000 UTC m=+0.064791111 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
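
The health_status records come from podman's scheduled healthcheck, which runs the configured 'test' command ('/openstack/healthcheck node_exporter' here, with the mount listed in config_data) inside the container and tracks the failing streak. The same check can be run by hand:

    podman healthcheck run node_exporter && echo healthy
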
Dec  1 19:55:28 compute-0 nova_compute[189564]: 2025-12-01 19:55:28.599 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:29 compute-0 podman[203750]: time="2025-12-01T19:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:55:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 19:55:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec  1 19:55:29 compute-0 nova_compute[189564]: 2025-12-01 19:55:29.822 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:31 compute-0 podman[251426]: 2025-12-01 19:55:31.394670321 +0000 UTC m=+0.152440762 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:55:31 compute-0 openstack_network_exporter[205914]: ERROR   19:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:55:31 compute-0 openstack_network_exporter[205914]: ERROR   19:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:55:31 compute-0 openstack_network_exporter[205914]: ERROR   19:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:55:31 compute-0 openstack_network_exporter[205914]: ERROR   19:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:55:31 compute-0 openstack_network_exporter[205914]: ERROR   19:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
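
These exporter errors mean the ovs-appctl-style RPC found no unix control socket to talk to: OVS and OVN daemons create <daemon>.<pid>.ctl files under their run directory, and ovn-northd does not run on a compute node at all. A sketch of the lookup that is failing, assuming the usual default rundirs (the exporter's own Go logic in appctl.go is not reproduced here):

    import glob
    import os

    # Assumed default rundirs; a containerized exporter must also have
    # them mounted, so "no control socket files found" can mean either
    # the daemon is absent (ovn-northd on a compute) or a missing mount.
    RUNDIRS = {
        "ovsdb-server": "/var/run/openvswitch",
        "ovs-vswitchd": "/var/run/openvswitch",
        "ovn-northd": "/var/run/ovn",
    }

    def find_ctl(daemon):
        # Control sockets are named <daemon>.<pid>.ctl, so glob for any pid.
        sockets = glob.glob(os.path.join(RUNDIRS[daemon], daemon + ".*.ctl"))
        return sockets[0] if sockets else None

    for daemon in RUNDIRS:
        print(daemon, find_ctl(daemon) or "no control socket files found")
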
Dec  1 19:55:33 compute-0 nova_compute[189564]: 2025-12-01 19:55:33.602 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:34 compute-0 nova_compute[189564]: 2025-12-01 19:55:34.789 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764618919.787034, 850ac274-3f22-41ce-b7d7-ac64d7adac70 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 19:55:34 compute-0 nova_compute[189564]: 2025-12-01 19:55:34.790 189568 INFO nova.compute.manager [-] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] VM Stopped (Lifecycle Event)
Dec  1 19:55:34 compute-0 nova_compute[189564]: 2025-12-01 19:55:34.819 189568 DEBUG nova.compute.manager [None req-6e6bdde8-2871-4b2f-8ff8-15b30bd9de63 - - - - - -] [instance: 850ac274-3f22-41ce-b7d7-ac64d7adac70] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 19:55:34 compute-0 nova_compute[189564]: 2025-12-01 19:55:34.827 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.169 189568 DEBUG oslo_concurrency.lockutils [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.170 189568 DEBUG oslo_concurrency.lockutils [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.171 189568 DEBUG oslo_concurrency.lockutils [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.172 189568 DEBUG oslo_concurrency.lockutils [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.173 189568 DEBUG oslo_concurrency.lockutils [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.176 189568 INFO nova.compute.manager [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Terminating instance
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.179 189568 DEBUG nova.compute.manager [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  1 19:55:37 compute-0 kernel: tap3cef930c-87 (unregistering): left promiscuous mode
Dec  1 19:55:37 compute-0 NetworkManager[56474]: <info>  [1764618937.2362] device (tap3cef930c-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 19:55:37 compute-0 ovn_controller[97948]: 2025-12-01T19:55:37Z|00061|binding|INFO|Releasing lport 3cef930c-870a-4936-a206-b4c3a7ce5c1a from this chassis (sb_readonly=0)
Dec  1 19:55:37 compute-0 ovn_controller[97948]: 2025-12-01T19:55:37Z|00062|binding|INFO|Setting lport 3cef930c-870a-4936-a206-b4c3a7ce5c1a down in Southbound
Dec  1 19:55:37 compute-0 ovn_controller[97948]: 2025-12-01T19:55:37Z|00063|binding|INFO|Removing iface tap3cef930c-87 ovn-installed in OVS
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.255 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.259 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.267 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:8b:70 192.168.0.47'], port_security=['fa:16:3e:fc:8b:70 192.168.0.47'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.47/24', 'neutron:device_id': 'e73931e9-f7fa-4666-b781-700b385532a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2a4b8529-6171-4880-a97c-66966115a61b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35d2a9caf1634dca9fc12ec078239d84', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e61a5e79-a7e0-4e4e-bcbc-f9aad845c2b8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58f8227a-30b3-42df-b03a-90442a651a6d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=3cef930c-870a-4936-a206-b4c3a7ce5c1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.269 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 3cef930c-870a-4936-a206-b4c3a7ce5c1a in datapath 2a4b8529-6171-4880-a97c-66966115a61b unbound from our chassis
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.271 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2a4b8529-6171-4880-a97c-66966115a61b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.272 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[695e9d22-c133-4b8d-bbd3-2c0111c8e8b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.273 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b namespace which is not needed anymore
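
The namespace being cleaned up is named mechanically: metadata for network 2a4b8529-6171-4880-a97c-66966115a61b is served from netns ovnmeta-<network_uuid>, and once the last VIF on the datapath is unbound the agent removes it through its privsep daemon. A hedged sketch of the equivalent privileged call (neutron's ip_lib uses pyroute2 underneath; the helper below is illustrative, not the agent's code):

    from pyroute2 import netns

    NETWORK_ID = "2a4b8529-6171-4880-a97c-66966115a61b"  # from the log

    def teardown_metadata_namespace(network_id):
        name = "ovnmeta-" + network_id
        # Corresponds to the "Namespace ... deleted. remove_netns" line
        # further down; neutron does this under privsep rather than as root.
        if name in netns.listnetns():
            netns.remove(name)

    teardown_metadata_namespace(NETWORK_ID)
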
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.293 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  1 19:55:37 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 4min 35.626s CPU time.
Dec  1 19:55:37 compute-0 systemd-machined[155891]: Machine qemu-1-instance-00000001 terminated.
Dec  1 19:55:37 compute-0 podman[251446]: 2025-12-01 19:55:37.37619889 +0000 UTC m=+0.137372864 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.414 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.423 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.485 189568 INFO nova.virt.libvirt.driver [-] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Instance destroyed successfully.
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.486 189568 DEBUG nova.objects.instance [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lazy-loading 'resources' on Instance uuid e73931e9-f7fa-4666-b781-700b385532a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.497 189568 DEBUG nova.compute.manager [req-e31a68d0-7ea3-49b3-a383-f87ebe48faec req-19c28666-e670-4c13-963e-3d85e3792bb6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received event network-vif-unplugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.498 189568 DEBUG oslo_concurrency.lockutils [req-e31a68d0-7ea3-49b3-a383-f87ebe48faec req-19c28666-e670-4c13-963e-3d85e3792bb6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.499 189568 DEBUG oslo_concurrency.lockutils [req-e31a68d0-7ea3-49b3-a383-f87ebe48faec req-19c28666-e670-4c13-963e-3d85e3792bb6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.499 189568 DEBUG oslo_concurrency.lockutils [req-e31a68d0-7ea3-49b3-a383-f87ebe48faec req-19c28666-e670-4c13-963e-3d85e3792bb6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.500 189568 DEBUG nova.compute.manager [req-e31a68d0-7ea3-49b3-a383-f87ebe48faec req-19c28666-e670-4c13-963e-3d85e3792bb6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] No waiting events found dispatching network-vif-unplugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.500 189568 DEBUG nova.compute.manager [req-e31a68d0-7ea3-49b3-a383-f87ebe48faec req-19c28666-e670-4c13-963e-3d85e3792bb6 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received event network-vif-unplugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  1 19:55:37 compute-0 neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b[240041]: [NOTICE]   (240045) : haproxy version is 2.8.14-c23fe91
Dec  1 19:55:37 compute-0 neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b[240041]: [NOTICE]   (240045) : path to executable is /usr/sbin/haproxy
Dec  1 19:55:37 compute-0 neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b[240041]: [WARNING]  (240045) : Exiting Master process...
Dec  1 19:55:37 compute-0 neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b[240041]: [WARNING]  (240045) : Exiting Master process...
Dec  1 19:55:37 compute-0 neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b[240041]: [ALERT]    (240045) : Current worker (240047) exited with code 143 (Terminated)
Dec  1 19:55:37 compute-0 neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b[240041]: [WARNING]  (240045) : All workers exited. Exiting... (0)
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.504 189568 DEBUG nova.virt.libvirt.vif [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T19:29:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T19:29:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35d2a9caf1634dca9fc12ec078239d84',ramdisk_id='',reservation_id='r-rcohc3gr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='15bc897a-453b-4133-b6db-08ecdc2b6db0',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T19:29:55Z,user_data=None,user_id='7c24e8f82e7842b785e565ac65c7f494',uuid=e73931e9-f7fa-4666-b781-700b385532a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.505 189568 DEBUG nova.network.os_vif_util [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converting VIF {"id": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "address": "fa:16:3e:fc:8b:70", "network": {"id": "2a4b8529-6171-4880-a97c-66966115a61b", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.47", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35d2a9caf1634dca9fc12ec078239d84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cef930c-87", "ovs_interfaceid": "3cef930c-870a-4936-a206-b4c3a7ce5c1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.506 189568 DEBUG nova.network.os_vif_util [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:70,bridge_name='br-int',has_traffic_filtering=True,id=3cef930c-870a-4936-a206-b4c3a7ce5c1a,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cef930c-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.506 189568 DEBUG os_vif [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:70,bridge_name='br-int',has_traffic_filtering=True,id=3cef930c-870a-4936-a206-b4c3a7ce5c1a,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cef930c-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.508 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.509 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cef930c-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
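
The DelPortCommand transaction above is os-vif's ovsdb-backed unplug: one idempotent delete of the tap port from br-int. Roughly the same call through ovsdbapp's public Open_vSwitch API looks like the sketch below; the connection string is the usual local ovsdb-server socket and is an assumption here, not taken from this host's config:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/var/run/openvswitch/db.sock"  # assumed default socket
    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Queues DelPortCommand(port=..., bridge=..., if_exists=True) and
    # commits it in one transaction, matching the do_commit line above.
    api.del_port("tap3cef930c-87", bridge="br-int",
                 if_exists=True).execute(check_error=True)
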
Dec  1 19:55:37 compute-0 systemd[1]: libpod-d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd.scope: Deactivated successfully.
Dec  1 19:55:37 compute-0 conmon[240041]: conmon d90ba4d9f5da00977202 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd.scope/container/memory.events
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.511 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 podman[251496]: 2025-12-01 19:55:37.512970434 +0000 UTC m=+0.083001067 container died d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.513 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.516 189568 INFO os_vif [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fc:8b:70,bridge_name='br-int',has_traffic_filtering=True,id=3cef930c-870a-4936-a206-b4c3a7ce5c1a,network=Network(2a4b8529-6171-4880-a97c-66966115a61b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cef930c-87')
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.517 189568 INFO nova.virt.libvirt.driver [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Deleting instance files /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9_del
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.517 189568 INFO nova.virt.libvirt.driver [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Deletion of /var/lib/nova/instances/e73931e9-f7fa-4666-b781-700b385532a9_del complete
Dec  1 19:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd-userdata-shm.mount: Deactivated successfully.
Dec  1 19:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dcbb4f8550a56991064453850edc00aa9ca762c63447d47eb35d8dba1732d59-merged.mount: Deactivated successfully.
Dec  1 19:55:37 compute-0 podman[251496]: 2025-12-01 19:55:37.567548987 +0000 UTC m=+0.137579580 container cleanup d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.571 189568 INFO nova.compute.manager [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Took 0.39 seconds to destroy the instance on the hypervisor.
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.572 189568 DEBUG oslo.service.loopingcall [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.573 189568 DEBUG nova.compute.manager [-] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.573 189568 DEBUG nova.network.neutron [-] [instance: e73931e9-f7fa-4666-b781-700b385532a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  1 19:55:37 compute-0 systemd[1]: libpod-conmon-d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd.scope: Deactivated successfully.
Dec  1 19:55:37 compute-0 podman[251545]: 2025-12-01 19:55:37.655652341 +0000 UTC m=+0.063583624 container remove d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.670 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d9e37c-bd5b-4628-b4a8-500a0994638e]: (4, ('Mon Dec  1 07:55:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b (d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd)\nd90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd\nMon Dec  1 07:55:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b (d90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd)\nd90ba4d9f5da009772020c9c416936175fc09c2471a29f0edd5fd21cc78957cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.673 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b6df4f93-febd-45bb-b579-e82f889c09f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.675 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a4b8529-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.678 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 kernel: tap2a4b8529-60: left promiscuous mode
Dec  1 19:55:37 compute-0 nova_compute[189564]: 2025-12-01 19:55:37.695 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.698 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[64d0cef8-dab4-4d0e-ad68-a778c78ac044]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.717 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[23a5c206-5e3a-41da-a6b7-916d7e275637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.719 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2c852f-9260-4501-8b7a-62ec39422b55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.749 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[dd19e481-1e34-4d54-a014-10477b606d9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 388602, 'reachable_time': 28719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251561, 'error': None, 'target': 'ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 19:55:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d2a4b8529\x2d6171\x2d4880\x2da97c\x2d66966115a61b.mount: Deactivated successfully.
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.766 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2a4b8529-6171-4880-a97c-66966115a61b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  1 19:55:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:55:37.768 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[eab5ce4d-1452-44ae-95ad-ff39cd5bd906]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  1 19:55:38 compute-0 nova_compute[189564]: 2025-12-01 19:55:38.606 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:39 compute-0 nova_compute[189564]: 2025-12-01 19:55:39.621 189568 DEBUG nova.compute.manager [req-52d0f38b-f64e-48cb-88b4-cd6807f8eec0 req-66fc1a8a-a3a7-40ca-aa2f-1b0ce6690627 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received event network-vif-plugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 19:55:39 compute-0 nova_compute[189564]: 2025-12-01 19:55:39.621 189568 DEBUG oslo_concurrency.lockutils [req-52d0f38b-f64e-48cb-88b4-cd6807f8eec0 req-66fc1a8a-a3a7-40ca-aa2f-1b0ce6690627 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "e73931e9-f7fa-4666-b781-700b385532a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:55:39 compute-0 nova_compute[189564]: 2025-12-01 19:55:39.622 189568 DEBUG oslo_concurrency.lockutils [req-52d0f38b-f64e-48cb-88b4-cd6807f8eec0 req-66fc1a8a-a3a7-40ca-aa2f-1b0ce6690627 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:55:39 compute-0 nova_compute[189564]: 2025-12-01 19:55:39.622 189568 DEBUG oslo_concurrency.lockutils [req-52d0f38b-f64e-48cb-88b4-cd6807f8eec0 req-66fc1a8a-a3a7-40ca-aa2f-1b0ce6690627 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:55:39 compute-0 nova_compute[189564]: 2025-12-01 19:55:39.623 189568 DEBUG nova.compute.manager [req-52d0f38b-f64e-48cb-88b4-cd6807f8eec0 req-66fc1a8a-a3a7-40ca-aa2f-1b0ce6690627 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] No waiting events found dispatching network-vif-plugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 19:55:39 compute-0 nova_compute[189564]: 2025-12-01 19:55:39.623 189568 WARNING nova.compute.manager [req-52d0f38b-f64e-48cb-88b4-cd6807f8eec0 req-66fc1a8a-a3a7-40ca-aa2f-1b0ce6690627 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received unexpected event network-vif-plugged-3cef930c-870a-4936-a206-b4c3a7ce5c1a for instance with vm_state active and task_state deleting.
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.202 189568 DEBUG nova.network.neutron [-] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.227 189568 INFO nova.compute.manager [-] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Took 2.65 seconds to deallocate network for instance.
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.303 189568 DEBUG nova.compute.manager [req-fae9cd18-4676-4c77-a170-b363270663f0 req-43e7a107-d259-4edc-a3f7-8a2f8160bdfb 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Received event network-vif-deleted-3cef930c-870a-4936-a206-b4c3a7ce5c1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.306 189568 DEBUG oslo_concurrency.lockutils [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.307 189568 DEBUG oslo_concurrency.lockutils [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.392 189568 DEBUG nova.compute.provider_tree [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.420 189568 DEBUG nova.scheduler.client.report [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
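
The inventory reported above fixes what the scheduler may place on this node: Placement's capacity for each resource class is (total - reserved) * allocation_ratio. Checked against the values in the log (a quick arithmetic sketch, not Nova code):

    # Placement capacity rule: capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2 schedulable units
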
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.452 189568 DEBUG oslo_concurrency.lockutils [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.494 189568 INFO nova.scheduler.client.report [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Deleted allocations for instance e73931e9-f7fa-4666-b781-700b385532a9
Dec  1 19:55:40 compute-0 nova_compute[189564]: 2025-12-01 19:55:40.581 189568 DEBUG oslo_concurrency.lockutils [None req-89b324ab-e1ff-4df8-8941-7a0457a3888f 7c24e8f82e7842b785e565ac65c7f494 35d2a9caf1634dca9fc12ec078239d84 - - default default] Lock "e73931e9-f7fa-4666-b781-700b385532a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:55:41 compute-0 podman[251568]: 2025-12-01 19:55:41.319221171 +0000 UTC m=+0.083487941 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 19:55:41 compute-0 podman[251565]: 2025-12-01 19:55:41.335695392 +0000 UTC m=+0.101480270 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.buildah.version=1.29.0, distribution-scope=public, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, version=9.4)
Dec  1 19:55:41 compute-0 podman[251566]: 2025-12-01 19:55:41.351016428 +0000 UTC m=+0.118283171 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 19:55:41 compute-0 podman[251569]: 2025-12-01 19:55:41.357470908 +0000 UTC m=+0.117854428 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec  1 19:55:41 compute-0 podman[251567]: 2025-12-01 19:55:41.362170804 +0000 UTC m=+0.122723319 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  1 19:55:42 compute-0 nova_compute[189564]: 2025-12-01 19:55:42.513 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:43 compute-0 nova_compute[189564]: 2025-12-01 19:55:43.610 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:47 compute-0 nova_compute[189564]: 2025-12-01 19:55:47.519 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:48 compute-0 nova_compute[189564]: 2025-12-01 19:55:48.613 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.819 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.819 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.819 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.820 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.824 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.824 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.824 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.825 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.825 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.825 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.826 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.826 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.827 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.827 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.827 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.827 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.827 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.827 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.828 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.828 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.828 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.828 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.828 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.828 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.829 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.829 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.829 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.829 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.829 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.829 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.830 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.830 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.830 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.830 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.830 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.830 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.831 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.831 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.831 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.832 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.832 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.832 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.832 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.832 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.832 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.833 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.833 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.833 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.833 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:55:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:55:50 compute-0 podman[251662]: 2025-12-01 19:55:50.344889781 +0000 UTC m=+0.108167426 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, release=1755695350, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, architecture=x86_64, managed_by=edpm_ansible)
Dec  1 19:55:52 compute-0 nova_compute[189564]: 2025-12-01 19:55:52.482 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764618937.4797685, e73931e9-f7fa-4666-b781-700b385532a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 19:55:52 compute-0 nova_compute[189564]: 2025-12-01 19:55:52.482 189568 INFO nova.compute.manager [-] [instance: e73931e9-f7fa-4666-b781-700b385532a9] VM Stopped (Lifecycle Event)
Dec  1 19:55:52 compute-0 nova_compute[189564]: 2025-12-01 19:55:52.506 189568 DEBUG nova.compute.manager [None req-bcef202f-550e-42f1-ab8a-75e7330b1734 - - - - - -] [instance: e73931e9-f7fa-4666-b781-700b385532a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 19:55:52 compute-0 nova_compute[189564]: 2025-12-01 19:55:52.524 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:53 compute-0 nova_compute[189564]: 2025-12-01 19:55:53.616 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:56 compute-0 podman[251685]: 2025-12-01 19:55:56.278244437 +0000 UTC m=+0.111010105 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:55:57 compute-0 nova_compute[189564]: 2025-12-01 19:55:57.529 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:58 compute-0 nova_compute[189564]: 2025-12-01 19:55:58.620 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:55:59 compute-0 podman[203750]: time="2025-12-01T19:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:55:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:55:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Dec  1 19:56:01 compute-0 openstack_network_exporter[205914]: ERROR   19:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:56:01 compute-0 openstack_network_exporter[205914]: ERROR   19:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:56:01 compute-0 openstack_network_exporter[205914]: ERROR   19:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:56:01 compute-0 openstack_network_exporter[205914]: ERROR   19:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:56:01 compute-0 openstack_network_exporter[205914]: ERROR   19:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:56:02 compute-0 podman[251710]: 2025-12-01 19:56:02.361706965 +0000 UTC m=+0.116340192 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 19:56:02 compute-0 nova_compute[189564]: 2025-12-01 19:56:02.533 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:56:03 compute-0 nova_compute[189564]: 2025-12-01 19:56:03.623 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:56:07 compute-0 nova_compute[189564]: 2025-12-01 19:56:07.539 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:56:08 compute-0 podman[251731]: 2025-12-01 19:56:08.329556805 +0000 UTC m=+0.090719294 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 19:56:08 compute-0 nova_compute[189564]: 2025-12-01 19:56:08.625 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:56:09 compute-0 nova_compute[189564]: 2025-12-01 19:56:09.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:56:09 compute-0 nova_compute[189564]: 2025-12-01 19:56:09.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:56:10 compute-0 ovn_controller[97948]: 2025-12-01T19:56:10Z|00064|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  1 19:56:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:56:12.212 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:56:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:56:12.213 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:56:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:56:12.213 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:56:12 compute-0 nova_compute[189564]: 2025-12-01 19:56:12.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:56:12 compute-0 podman[251758]: 2025-12-01 19:56:12.328097719 +0000 UTC m=+0.083194300 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 19:56:12 compute-0 podman[251757]: 2025-12-01 19:56:12.359094844 +0000 UTC m=+0.115986330 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:56:12 compute-0 podman[251756]: 2025-12-01 19:56:12.35931477 +0000 UTC m=+0.118229799 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:56:12 compute-0 podman[251755]: 2025-12-01 19:56:12.375300688 +0000 UTC m=+0.138864412 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., version=9.4, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 19:56:12 compute-0 podman[251759]: 2025-12-01 19:56:12.42230594 +0000 UTC m=+0.173257311 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 19:56:12 compute-0 nova_compute[189564]: 2025-12-01 19:56:12.542 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:13 compute-0 nova_compute[189564]: 2025-12-01 19:56:13.627 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:14 compute-0 nova_compute[189564]: 2025-12-01 19:56:14.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:14 compute-0 nova_compute[189564]: 2025-12-01 19:56:14.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:56:14 compute-0 nova_compute[189564]: 2025-12-01 19:56:14.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:56:14 compute-0 nova_compute[189564]: 2025-12-01 19:56:14.269 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 19:56:14 compute-0 nova_compute[189564]: 2025-12-01 19:56:14.270 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:14 compute-0 nova_compute[189564]: 2025-12-01 19:56:14.271 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.286 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.286 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.289 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.290 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.798 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.800 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5369MB free_disk=72.37810134887695GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.800 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.800 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.976 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:56:16 compute-0 nova_compute[189564]: 2025-12-01 19:56:16.976 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:56:17 compute-0 nova_compute[189564]: 2025-12-01 19:56:17.046 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:56:17 compute-0 nova_compute[189564]: 2025-12-01 19:56:17.064 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:56:17 compute-0 nova_compute[189564]: 2025-12-01 19:56:17.094 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:56:17 compute-0 nova_compute[189564]: 2025-12-01 19:56:17.095 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:56:17 compute-0 nova_compute[189564]: 2025-12-01 19:56:17.548 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:18 compute-0 nova_compute[189564]: 2025-12-01 19:56:18.629 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:20 compute-0 nova_compute[189564]: 2025-12-01 19:56:20.092 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:20 compute-0 nova_compute[189564]: 2025-12-01 19:56:20.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:21 compute-0 podman[251851]: 2025-12-01 19:56:21.343149753 +0000 UTC m=+0.115803144 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:56:22 compute-0 nova_compute[189564]: 2025-12-01 19:56:22.554 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:23 compute-0 nova_compute[189564]: 2025-12-01 19:56:23.634 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:26 compute-0 nova_compute[189564]: 2025-12-01 19:56:26.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:26 compute-0 nova_compute[189564]: 2025-12-01 19:56:26.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 19:56:26 compute-0 nova_compute[189564]: 2025-12-01 19:56:26.270 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 19:56:27 compute-0 podman[251871]: 2025-12-01 19:56:27.335466484 +0000 UTC m=+0.100214539 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:56:27 compute-0 nova_compute[189564]: 2025-12-01 19:56:27.558 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:28 compute-0 nova_compute[189564]: 2025-12-01 19:56:28.636 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:29 compute-0 nova_compute[189564]: 2025-12-01 19:56:29.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:29 compute-0 podman[203750]: time="2025-12-01T19:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:56:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:56:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Dec  1 19:56:31 compute-0 openstack_network_exporter[205914]: ERROR   19:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:56:31 compute-0 openstack_network_exporter[205914]: ERROR   19:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:56:31 compute-0 openstack_network_exporter[205914]: ERROR   19:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:56:31 compute-0 openstack_network_exporter[205914]: ERROR   19:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:56:31 compute-0 openstack_network_exporter[205914]: ERROR   19:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:56:32 compute-0 nova_compute[189564]: 2025-12-01 19:56:32.562 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:33 compute-0 podman[251897]: 2025-12-01 19:56:33.369090141 +0000 UTC m=+0.130552973 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd)
Dec  1 19:56:33 compute-0 nova_compute[189564]: 2025-12-01 19:56:33.639 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:37 compute-0 nova_compute[189564]: 2025-12-01 19:56:37.567 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:38 compute-0 nova_compute[189564]: 2025-12-01 19:56:38.263 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:56:38 compute-0 nova_compute[189564]: 2025-12-01 19:56:38.263 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 19:56:38 compute-0 nova_compute[189564]: 2025-12-01 19:56:38.642 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:39 compute-0 podman[251917]: 2025-12-01 19:56:39.349608326 +0000 UTC m=+0.107200377 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:56:42 compute-0 nova_compute[189564]: 2025-12-01 19:56:42.572 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:43 compute-0 podman[251947]: 2025-12-01 19:56:43.360921547 +0000 UTC m=+0.099679823 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 19:56:43 compute-0 podman[251940]: 2025-12-01 19:56:43.373711824 +0000 UTC m=+0.141569235 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, release-0.7.12=, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, io.openshift.expose-services=)
Dec  1 19:56:43 compute-0 podman[251941]: 2025-12-01 19:56:43.380715593 +0000 UTC m=+0.128235721 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:56:43 compute-0 podman[251948]: 2025-12-01 19:56:43.386345548 +0000 UTC m=+0.115403992 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  1 19:56:43 compute-0 podman[251950]: 2025-12-01 19:56:43.410984004 +0000 UTC m=+0.138285674 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  1 19:56:43 compute-0 nova_compute[189564]: 2025-12-01 19:56:43.645 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:47 compute-0 nova_compute[189564]: 2025-12-01 19:56:47.577 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:48 compute-0 nova_compute[189564]: 2025-12-01 19:56:48.646 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:52 compute-0 podman[252040]: 2025-12-01 19:56:52.358933622 +0000 UTC m=+0.118963403 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, architecture=x86_64, version=9.6, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec  1 19:56:52 compute-0 nova_compute[189564]: 2025-12-01 19:56:52.581 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:53 compute-0 nova_compute[189564]: 2025-12-01 19:56:53.650 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:57 compute-0 nova_compute[189564]: 2025-12-01 19:56:57.586 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:58 compute-0 podman[252062]: 2025-12-01 19:56:58.333000724 +0000 UTC m=+0.099007201 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:56:58 compute-0 nova_compute[189564]: 2025-12-01 19:56:58.652 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:56:59 compute-0 podman[203750]: time="2025-12-01T19:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:56:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:56:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec  1 19:57:01 compute-0 openstack_network_exporter[205914]: ERROR   19:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:57:01 compute-0 openstack_network_exporter[205914]: ERROR   19:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:57:01 compute-0 openstack_network_exporter[205914]: ERROR   19:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:57:01 compute-0 openstack_network_exporter[205914]: ERROR   19:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:57:01 compute-0 openstack_network_exporter[205914]: ERROR   19:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:57:02 compute-0 nova_compute[189564]: 2025-12-01 19:57:02.589 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:03 compute-0 nova_compute[189564]: 2025-12-01 19:57:03.655 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:04 compute-0 podman[252087]: 2025-12-01 19:57:04.339355872 +0000 UTC m=+0.113071258 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  1 19:57:07 compute-0 nova_compute[189564]: 2025-12-01 19:57:07.595 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:08 compute-0 nova_compute[189564]: 2025-12-01 19:57:08.657 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:09 compute-0 nova_compute[189564]: 2025-12-01 19:57:09.267 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:09 compute-0 nova_compute[189564]: 2025-12-01 19:57:09.267 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 19:57:10 compute-0 podman[252110]: 2025-12-01 19:57:10.304426907 +0000 UTC m=+0.073595611 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:57:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:57:12.213 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:57:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:57:12.214 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:57:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:57:12.214 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:57:12 compute-0 nova_compute[189564]: 2025-12-01 19:57:12.598 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:13 compute-0 nova_compute[189564]: 2025-12-01 19:57:13.660 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:14 compute-0 nova_compute[189564]: 2025-12-01 19:57:14.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:14 compute-0 nova_compute[189564]: 2025-12-01 19:57:14.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:14 compute-0 podman[252136]: 2025-12-01 19:57:14.342081268 +0000 UTC m=+0.091307891 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 19:57:14 compute-0 podman[252135]: 2025-12-01 19:57:14.354464384 +0000 UTC m=+0.109756216 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, release-0.7.12=, io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 19:57:14 compute-0 podman[252137]: 2025-12-01 19:57:14.358461729 +0000 UTC m=+0.101906062 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 19:57:14 compute-0 podman[252138]: 2025-12-01 19:57:14.361496952 +0000 UTC m=+0.113097469 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:57:14 compute-0 podman[252139]: 2025-12-01 19:57:14.39096206 +0000 UTC m=+0.129481090 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
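Each health_status line above is podman executing the container's configured healthcheck ('test': '/openstack/healthcheck ...') against the mounted /var/lib/openstack/healthchecks/<name> directory. The same check can be driven by hand; a sketch using the podman CLI, with container names taken from the log:

    import subprocess

    def healthy(container: str) -> bool:
        # 'podman healthcheck run' exits 0 when the configured test passes.
        return subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True,
        ).returncode == 0

    for name in ("multipathd", "ovn_controller", "ovn_metadata_agent"):
        print(name, "healthy" if healthy(name) else "unhealthy")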
Dec  1 19:57:15 compute-0 nova_compute[189564]: 2025-12-01 19:57:15.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:16 compute-0 nova_compute[189564]: 2025-12-01 19:57:16.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:16 compute-0 nova_compute[189564]: 2025-12-01 19:57:16.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:57:16 compute-0 nova_compute[189564]: 2025-12-01 19:57:16.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:57:16 compute-0 nova_compute[189564]: 2025-12-01 19:57:16.269 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 19:57:16 compute-0 nova_compute[189564]: 2025-12-01 19:57:16.269 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.288 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.289 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.289 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.290 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.602 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.824 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.825 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5365MB free_disk=72.37419128417969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.826 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:57:17 compute-0 nova_compute[189564]: 2025-12-01 19:57:17.826 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:57:18 compute-0 nova_compute[189564]: 2025-12-01 19:57:18.032 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:57:18 compute-0 nova_compute[189564]: 2025-12-01 19:57:18.033 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:57:18 compute-0 nova_compute[189564]: 2025-12-01 19:57:18.066 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:57:18 compute-0 nova_compute[189564]: 2025-12-01 19:57:18.090 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
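The inventory payload above is what the resource tracker reports to placement; the schedulable capacity for each resource class follows (total - reserved) * allocation_ratio. Reproducing the arithmetic with the logged values:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 70.2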
Dec  1 19:57:18 compute-0 nova_compute[189564]: 2025-12-01 19:57:18.093 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:57:18 compute-0 nova_compute[189564]: 2025-12-01 19:57:18.093 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:57:18 compute-0 nova_compute[189564]: 2025-12-01 19:57:18.664 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:21 compute-0 nova_compute[189564]: 2025-12-01 19:57:21.089 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:22 compute-0 nova_compute[189564]: 2025-12-01 19:57:22.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:22 compute-0 nova_compute[189564]: 2025-12-01 19:57:22.264 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:57:22 compute-0 nova_compute[189564]: 2025-12-01 19:57:22.607 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:23 compute-0 podman[252234]: 2025-12-01 19:57:23.347774913 +0000 UTC m=+0.107233167 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64)
Dec  1 19:57:23 compute-0 nova_compute[189564]: 2025-12-01 19:57:23.667 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:27 compute-0 nova_compute[189564]: 2025-12-01 19:57:27.613 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:28 compute-0 nova_compute[189564]: 2025-12-01 19:57:28.670 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:29 compute-0 podman[252254]: 2025-12-01 19:57:29.330402123 +0000 UTC m=+0.092178089 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
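node_exporter above is started with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, so only matching systemd units are scraped. The filter is an anchored regular expression; checking it in Python (the sample unit names are hypothetical):

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ("ovsdb-server.service", "virtqemud.service", "sshd.service"):
        # node_exporter anchors the pattern; fullmatch() mirrors that here.
        print(unit, bool(unit_include.fullmatch(unit)))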
Dec  1 19:57:29 compute-0 podman[203750]: time="2025-12-01T19:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:57:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:57:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4330 "" "Go-http-client/1.1"
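The two GET lines are the podman API service answering libpod REST calls; per its config_data above, podman_exporter scrapes /run/podman/podman.sock. A sketch of issuing the same containers/json query over the unix socket with only the standard library (the connection class is a hypothetical helper; path and API version copied from the log):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Minimal unix-socket transport for the libpod REST API.
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")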
Dec  1 19:57:31 compute-0 openstack_network_exporter[205914]: ERROR   19:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:57:31 compute-0 openstack_network_exporter[205914]: ERROR   19:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:57:31 compute-0 openstack_network_exporter[205914]: ERROR   19:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:57:31 compute-0 openstack_network_exporter[205914]: ERROR   19:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:57:31 compute-0 openstack_network_exporter[205914]: ERROR   19:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
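The exporter errors above mean openstack_network_exporter could not find the ovs-vswitchd/ovn-northd control sockets it needs for its appctl-style queries, and that no userspace (netdev) datapath exists for the PMD commands. Probing the same preconditions by hand, assuming the default OVS run directory:

    import glob
    import subprocess

    sockets = glob.glob("/var/run/openvswitch/ovs-vswitchd.*.ctl")
    if not sockets:
        print("no control socket files found")  # the exporter's failure mode
    else:
        # Same query the exporter issues; it fails with "please specify an
        # existing datapath" unless a netdev (DPDK) datapath is configured.
        out = subprocess.run(
            ["ovs-appctl", "-t", sockets[0], "dpif-netdev/pmd-perf-show"],
            capture_output=True, text=True,
        )
        print(out.stdout or out.stderr)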
Dec  1 19:57:32 compute-0 nova_compute[189564]: 2025-12-01 19:57:32.617 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:33 compute-0 nova_compute[189564]: 2025-12-01 19:57:33.672 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:35 compute-0 podman[252278]: 2025-12-01 19:57:35.334872291 +0000 UTC m=+0.105370510 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 19:57:37 compute-0 nova_compute[189564]: 2025-12-01 19:57:37.624 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:38 compute-0 nova_compute[189564]: 2025-12-01 19:57:38.676 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:41 compute-0 podman[252296]: 2025-12-01 19:57:41.34707152 +0000 UTC m=+0.107048370 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 19:57:42 compute-0 nova_compute[189564]: 2025-12-01 19:57:42.627 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:43 compute-0 nova_compute[189564]: 2025-12-01 19:57:43.680 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:44 compute-0 podman[252320]: 2025-12-01 19:57:44.821836788 +0000 UTC m=+0.103496681 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Dec  1 19:57:44 compute-0 podman[252318]: 2025-12-01 19:57:44.830487147 +0000 UTC m=+0.125796415 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_id=edpm, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=)
Dec  1 19:57:44 compute-0 podman[252319]: 2025-12-01 19:57:44.834530073 +0000 UTC m=+0.120713287 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 19:57:44 compute-0 podman[252321]: 2025-12-01 19:57:44.854058771 +0000 UTC m=+0.128905712 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  1 19:57:44 compute-0 podman[252326]: 2025-12-01 19:57:44.868395376 +0000 UTC m=+0.139425309 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  1 19:57:47 compute-0 nova_compute[189564]: 2025-12-01 19:57:47.630 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:48 compute-0 nova_compute[189564]: 2025-12-01 19:57:48.683 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.819 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.820 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
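The two manager lines say the [pollsters] source has more pollsters than worker threads, so a single-thread executor works through them serially and the cycle can overrun its interval. A toy reproduction of that queuing behaviour (pollster names taken from the log; the rest is hypothetical):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = [
        "network.incoming.bytes.delta",
        "network.outgoing.packets",
        "network.outgoing.bytes.delta",
    ]

    def poll(name):
        resources = []  # discovery found no local instances on this node
        if not resources:
            return f"Skip pollster {name}, no resources found this cycle"
        return f"{name}: polled {len(resources)} resources"

    # max_workers=1 mirrors "Processing pollsters ... with [1] threads":
    # every pollster after the first waits for the previous one to finish.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for line in executor.map(poll, pollsters):
            print(line)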
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.820 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.821 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.826 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.827 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.827 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.828 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.828 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.828 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.829 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.829 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.829 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.830 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.829 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb41d0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.830 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.830 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.830 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.832 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.834 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.834 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.834 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.834 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.834 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.834 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.835 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.836 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.837 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:57:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:57:48.838 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
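
Within the cycle just logged, every compute pollster triggered the same local_instances discovery; the empty result ([] on a node hosting no instances) is memoised in the discovery cache, so later pollsters reuse it and emit the "Skip pollster ..., no resources found this cycle" lines. A sketch of that assumed memoise-then-skip logic (poll_one, discover, and get_samples are illustrative names):

    def poll_one(pollster, discover, discovery_cache, log):
        method = "local_instances"
        if method not in discovery_cache:
            log("Executing discovery process for pollsters [%s] and "
                "discovery method [%s]" % (pollster.name, method))
            discovery_cache[method] = discover(method)   # [] on an idle compute node
        resources = discovery_cache[method]
        if not resources:
            log("Skip pollster %s, no resources found this cycle" % pollster.name)
            return []
        return pollster.get_samples(resources)
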
Dec  1 19:57:52 compute-0 nova_compute[189564]: 2025-12-01 19:57:52.635 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:57:53 compute-0 nova_compute[189564]: 2025-12-01 19:57:53.685 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:57:54 compute-0 podman[252416]: 2025-12-01 19:57:54.359933067 +0000 UTC m=+0.126178817 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Dec  1 19:57:57 compute-0 nova_compute[189564]: 2025-12-01 19:57:57.641 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:57:58 compute-0 nova_compute[189564]: 2025-12-01 19:57:58.687 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:57:59 compute-0 podman[203750]: time="2025-12-01T19:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:57:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:57:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
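
The two HTTP lines above are the podman system service answering libpod REST calls (from a Go client, presumably the podman exporter) over its unix socket. A self-contained way to reproduce the first call by hand, assuming the socket path mounted into the exporter per its config below (/run/podman/podman.sock):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix socket instead of TCP."""

        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")   # the log shows 200 / 28288 bytes
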
Dec  1 19:58:00 compute-0 podman[252439]: 2025-12-01 19:58:00.11348865 +0000 UTC m=+0.095930666 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:58:01 compute-0 openstack_network_exporter[205914]: ERROR   19:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:58:01 compute-0 openstack_network_exporter[205914]: ERROR   19:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:58:01 compute-0 openstack_network_exporter[205914]: ERROR   19:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:58:01 compute-0 openstack_network_exporter[205914]: ERROR   19:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:58:01 compute-0 openstack_network_exporter[205914]: ERROR   19:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
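
The exporter errors above come from appctl probing for daemon control sockets, which ovs/ovn daemons expose as <rundir>/<daemon>.<pid>.ctl; ovn-northd normally runs on the control plane rather than on a compute node, so its missing socket here is likely benign. A quick diagnostic sketch using the run directories mounted into the exporter container (paths taken from its config earlier in the log):

    import glob

    # (daemon, run directory) pairs the exporter appears to probe
    checks = [
        ("ovsdb-server", "/run/openvswitch"),
        ("ovs-vswitchd", "/run/openvswitch"),
        ("ovn-northd", "/run/ovn"),
    ]
    for daemon, rundir in checks:
        sockets = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "->", sockets[0] if sockets else "no control socket found")
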
Dec  1 19:58:02 compute-0 nova_compute[189564]: 2025-12-01 19:58:02.645 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:03 compute-0 nova_compute[189564]: 2025-12-01 19:58:03.691 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:06 compute-0 podman[252465]: 2025-12-01 19:58:06.340943868 +0000 UTC m=+0.098169726 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 19:58:07 compute-0 nova_compute[189564]: 2025-12-01 19:58:07.649 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:08 compute-0 nova_compute[189564]: 2025-12-01 19:58:08.698 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:09 compute-0 nova_compute[189564]: 2025-12-01 19:58:09.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:58:09 compute-0 nova_compute[189564]: 2025-12-01 19:58:09.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
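
The pair of nova lines above shows a periodic task short-circuiting on configuration: with reclaim_instance_interval at a non-positive value (0 is the default), soft-deleted instances queued for reclaim are never processed by this task. A sketch of the assumed guard (illustrative, not nova's code):

    def reclaim_queued_deletes(conf, log):
        interval = conf.get("reclaim_instance_interval", 0)
        if interval <= 0:
            log("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: find instances soft-deleted more than `interval`
        # seconds ago and reclaim them (elided)
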
Dec  1 19:58:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:58:12.215 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:58:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:58:12.215 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:58:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:58:12.216 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
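
The ProcessMonitor lines above are oslo.concurrency's standard lock tracing: one line when acquisition is attempted, one with the wait time once granted, one with the hold time on release. A plain-threading sketch that reproduces the same three-line pattern (timed_lock is an illustrative helper, not the oslo implementation):

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name, owner, log):
        lock = _locks.setdefault(name, threading.Lock())
        log(f'Acquiring lock "{name}" by "{owner}"')
        start = time.monotonic()
        with lock:
            log(f'Lock "{name}" acquired by "{owner}" :: waited '
                f'{time.monotonic() - start:.3f}s')
            held_from = time.monotonic()
            yield
            held = time.monotonic() - held_from
        log(f'Lock "{name}" "released" by "{owner}" :: held {held:.3f}s')
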
Dec  1 19:58:12 compute-0 podman[252485]: 2025-12-01 19:58:12.376715832 +0000 UTC m=+0.134974821 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:58:12 compute-0 nova_compute[189564]: 2025-12-01 19:58:12.654 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:13 compute-0 nova_compute[189564]: 2025-12-01 19:58:13.700 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:15 compute-0 nova_compute[189564]: 2025-12-01 19:58:15.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:58:15 compute-0 nova_compute[189564]: 2025-12-01 19:58:15.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:58:15 compute-0 podman[252512]: 2025-12-01 19:58:15.354130704 +0000 UTC m=+0.092612232 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 19:58:15 compute-0 podman[252509]: 2025-12-01 19:58:15.376310714 +0000 UTC m=+0.135623630 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=)
Dec  1 19:58:15 compute-0 podman[252511]: 2025-12-01 19:58:15.381878637 +0000 UTC m=+0.130540792 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  1 19:58:15 compute-0 podman[252517]: 2025-12-01 19:58:15.382530768 +0000 UTC m=+0.127049894 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  1 19:58:15 compute-0 podman[252510]: 2025-12-01 19:58:15.392432096 +0000 UTC m=+0.141248746 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
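
Each podman health_status line above is emitted when podman executes the healthcheck.test command baked into that container's config_data and records the result (health_status=healthy, failing streak, health log). The same probe can be run on demand; the container names below are taken from the log:

    import subprocess

    for name in ["ceilometer_agent_compute", "ovn_controller", "ovn_metadata_agent"]:
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        status = "healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})"
        print(f"{name}: {status}")
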
Dec  1 19:58:16 compute-0 nova_compute[189564]: 2025-12-01 19:58:16.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.293 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.294 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.294 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.295 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.657 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.729 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.730 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5390MB free_disk=72.37419128417969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.731 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.732 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.825 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.826 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.859 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.880 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.884 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:58:17 compute-0 nova_compute[189564]: 2025-12-01 19:58:17.885 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:58:18 compute-0 nova_compute[189564]: 2025-12-01 19:58:18.701 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:58:18 compute-0 nova_compute[189564]: 2025-12-01 19:58:18.885 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:58:18 compute-0 nova_compute[189564]: 2025-12-01 19:58:18.886 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:58:18 compute-0 nova_compute[189564]: 2025-12-01 19:58:18.887 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:58:18 compute-0 nova_compute[189564]: 2025-12-01 19:58:18.911 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 19:58:21 compute-0 nova_compute[189564]: 2025-12-01 19:58:21.270 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:58:22 compute-0 nova_compute[189564]: 2025-12-01 19:58:22.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:58:22 compute-0 nova_compute[189564]: 2025-12-01 19:58:22.662 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:58:23 compute-0 nova_compute[189564]: 2025-12-01 19:58:23.705 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:58:25 compute-0 podman[252606]: 2025-12-01 19:58:25.352866216 +0000 UTC m=+0.112735199 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 19:58:27 compute-0 nova_compute[189564]: 2025-12-01 19:58:27.668 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:28 compute-0 nova_compute[189564]: 2025-12-01 19:58:28.708 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:29 compute-0 podman[203750]: time="2025-12-01T19:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:58:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:58:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec  1 19:58:30 compute-0 podman[252625]: 2025-12-01 19:58:30.315394427 +0000 UTC m=+0.076393368 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 19:58:31 compute-0 openstack_network_exporter[205914]: ERROR   19:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:58:31 compute-0 openstack_network_exporter[205914]: ERROR   19:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:58:31 compute-0 openstack_network_exporter[205914]: ERROR   19:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:58:31 compute-0 openstack_network_exporter[205914]: ERROR   19:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:58:31 compute-0 openstack_network_exporter[205914]: ERROR   19:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:58:32 compute-0 nova_compute[189564]: 2025-12-01 19:58:32.671 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:33 compute-0 nova_compute[189564]: 2025-12-01 19:58:33.711 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:37 compute-0 podman[252649]: 2025-12-01 19:58:37.361755395 +0000 UTC m=+0.123760352 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:58:37 compute-0 nova_compute[189564]: 2025-12-01 19:58:37.676 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:38 compute-0 nova_compute[189564]: 2025-12-01 19:58:38.714 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:42 compute-0 nova_compute[189564]: 2025-12-01 19:58:42.682 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:43 compute-0 podman[252670]: 2025-12-01 19:58:43.359266987 +0000 UTC m=+0.120927223 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:58:43 compute-0 nova_compute[189564]: 2025-12-01 19:58:43.717 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:46 compute-0 podman[252693]: 2025-12-01 19:58:46.347598918 +0000 UTC m=+0.113355217 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, container_name=kepler, build-date=2024-09-18T21:23:30, version=9.4, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 19:58:46 compute-0 podman[252694]: 2025-12-01 19:58:46.352371407 +0000 UTC m=+0.102094858 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_id=edpm)
Dec  1 19:58:46 compute-0 podman[252702]: 2025-12-01 19:58:46.391789913 +0000 UTC m=+0.136120546 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 19:58:46 compute-0 podman[252695]: 2025-12-01 19:58:46.396640925 +0000 UTC m=+0.142229777 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible)
Dec  1 19:58:46 compute-0 podman[252703]: 2025-12-01 19:58:46.417637957 +0000 UTC m=+0.145083205 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 19:58:47 compute-0 nova_compute[189564]: 2025-12-01 19:58:47.685 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:48 compute-0 nova_compute[189564]: 2025-12-01 19:58:48.720 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:52 compute-0 nova_compute[189564]: 2025-12-01 19:58:52.690 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:53 compute-0 nova_compute[189564]: 2025-12-01 19:58:53.722 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:56 compute-0 podman[252790]: 2025-12-01 19:58:56.352978996 +0000 UTC m=+0.125207396 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Dec  1 19:58:57 compute-0 nova_compute[189564]: 2025-12-01 19:58:57.696 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:58 compute-0 nova_compute[189564]: 2025-12-01 19:58:58.724 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:58:59 compute-0 podman[203750]: time="2025-12-01T19:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:58:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:58:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4343 "" "Go-http-client/1.1"
Dec  1 19:59:01 compute-0 podman[252812]: 2025-12-01 19:59:01.277539884 +0000 UTC m=+0.055226599 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 19:59:01 compute-0 openstack_network_exporter[205914]: ERROR   19:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:59:01 compute-0 openstack_network_exporter[205914]: ERROR   19:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:59:01 compute-0 openstack_network_exporter[205914]: ERROR   19:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:59:01 compute-0 openstack_network_exporter[205914]: ERROR   19:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:59:01 compute-0 openstack_network_exporter[205914]: ERROR   19:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:59:02 compute-0 nova_compute[189564]: 2025-12-01 19:59:02.701 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:03 compute-0 nova_compute[189564]: 2025-12-01 19:59:03.726 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:07 compute-0 nova_compute[189564]: 2025-12-01 19:59:07.706 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:08 compute-0 podman[252836]: 2025-12-01 19:59:08.301927179 +0000 UTC m=+0.074582111 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  1 19:59:08 compute-0 nova_compute[189564]: 2025-12-01 19:59:08.728 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:09 compute-0 nova_compute[189564]: 2025-12-01 19:59:09.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:59:09 compute-0 nova_compute[189564]: 2025-12-01 19:59:09.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 19:59:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:59:12.217 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:59:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:59:12.218 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:59:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 19:59:12.218 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:59:12 compute-0 nova_compute[189564]: 2025-12-01 19:59:12.709 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:13 compute-0 nova_compute[189564]: 2025-12-01 19:59:13.731 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:14 compute-0 podman[252857]: 2025-12-01 19:59:14.361336228 +0000 UTC m=+0.118024363 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:59:15 compute-0 nova_compute[189564]: 2025-12-01 19:59:15.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:59:15 compute-0 nova_compute[189564]: 2025-12-01 19:59:15.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 19:59:17 compute-0 podman[252882]: 2025-12-01 19:59:17.332224778 +0000 UTC m=+0.081279230 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  1 19:59:17 compute-0 podman[252879]: 2025-12-01 19:59:17.344878751 +0000 UTC m=+0.104180012 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, name=ubi9, distribution-scope=public, config_id=edpm, io.openshift.tags=base rhel9)
Dec  1 19:59:17 compute-0 podman[252881]: 2025-12-01 19:59:17.351794247 +0000 UTC m=+0.093989776 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 19:59:17 compute-0 podman[252880]: 2025-12-01 19:59:17.355019846 +0000 UTC m=+0.109305482 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 19:59:17 compute-0 podman[252887]: 2025-12-01 19:59:17.407523931 +0000 UTC m=+0.154112966 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.439 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.440 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.440 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.440 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.711 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.771 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.772 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5361MB free_disk=72.37424087524414GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.772 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.772 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.944 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.945 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 19:59:17 compute-0 nova_compute[189564]: 2025-12-01 19:59:17.974 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 19:59:18 compute-0 nova_compute[189564]: 2025-12-01 19:59:18.036 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 19:59:18 compute-0 nova_compute[189564]: 2025-12-01 19:59:18.039 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 19:59:18 compute-0 nova_compute[189564]: 2025-12-01 19:59:18.039 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 19:59:18 compute-0 nova_compute[189564]: 2025-12-01 19:59:18.733 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:19 compute-0 nova_compute[189564]: 2025-12-01 19:59:19.041 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:59:19 compute-0 nova_compute[189564]: 2025-12-01 19:59:19.041 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:59:19 compute-0 nova_compute[189564]: 2025-12-01 19:59:19.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:59:19 compute-0 nova_compute[189564]: 2025-12-01 19:59:19.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 19:59:19 compute-0 nova_compute[189564]: 2025-12-01 19:59:19.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 19:59:19 compute-0 nova_compute[189564]: 2025-12-01 19:59:19.354 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 19:59:22 compute-0 nova_compute[189564]: 2025-12-01 19:59:22.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:59:22 compute-0 nova_compute[189564]: 2025-12-01 19:59:22.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:59:22 compute-0 nova_compute[189564]: 2025-12-01 19:59:22.494 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 19:59:22 compute-0 nova_compute[189564]: 2025-12-01 19:59:22.717 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:23 compute-0 nova_compute[189564]: 2025-12-01 19:59:23.736 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:27 compute-0 podman[252981]: 2025-12-01 19:59:27.346519204 +0000 UTC m=+0.108067585 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git)
Dec  1 19:59:27 compute-0 nova_compute[189564]: 2025-12-01 19:59:27.721 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:28 compute-0 nova_compute[189564]: 2025-12-01 19:59:28.738 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:29 compute-0 podman[203750]: time="2025-12-01T19:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:59:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:59:29 compute-0 podman[203750]: @ - - [01/Dec/2025:19:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec  1 19:59:31 compute-0 openstack_network_exporter[205914]: ERROR   19:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 19:59:31 compute-0 openstack_network_exporter[205914]: ERROR   19:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:59:31 compute-0 openstack_network_exporter[205914]: ERROR   19:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 19:59:31 compute-0 openstack_network_exporter[205914]: ERROR   19:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 19:59:31 compute-0 openstack_network_exporter[205914]: 
Dec  1 19:59:31 compute-0 openstack_network_exporter[205914]: ERROR   19:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 19:59:31 compute-0 openstack_network_exporter[205914]: 
Dec  1 19:59:32 compute-0 podman[253000]: 2025-12-01 19:59:32.348506001 +0000 UTC m=+0.106427252 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 19:59:32 compute-0 nova_compute[189564]: 2025-12-01 19:59:32.726 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:33 compute-0 nova_compute[189564]: 2025-12-01 19:59:33.742 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:37 compute-0 nova_compute[189564]: 2025-12-01 19:59:37.730 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:38 compute-0 nova_compute[189564]: 2025-12-01 19:59:38.744 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:39 compute-0 podman[253024]: 2025-12-01 19:59:39.321045592 +0000 UTC m=+0.100695394 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 19:59:42 compute-0 nova_compute[189564]: 2025-12-01 19:59:42.735 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:43 compute-0 nova_compute[189564]: 2025-12-01 19:59:43.746 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:44 compute-0 podman[253047]: 2025-12-01 19:59:44.759627435 +0000 UTC m=+0.082271591 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 19:59:47 compute-0 nova_compute[189564]: 2025-12-01 19:59:47.741 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:47 compute-0 podman[253069]: 2025-12-01 19:59:47.886446506 +0000 UTC m=+0.112659137 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler)
Dec  1 19:59:47 compute-0 podman[253070]: 2025-12-01 19:59:47.89429986 +0000 UTC m=+0.110778878 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  1 19:59:47 compute-0 podman[253077]: 2025-12-01 19:59:47.894458345 +0000 UTC m=+0.096610977 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 19:59:47 compute-0 podman[253071]: 2025-12-01 19:59:47.919224525 +0000 UTC m=+0.125396312 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 19:59:47 compute-0 podman[253084]: 2025-12-01 19:59:47.944492342 +0000 UTC m=+0.127693604 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 19:59:48 compute-0 nova_compute[189564]: 2025-12-01 19:59:48.752 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.821 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.821 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.821 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.822 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.825 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.827 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.827 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.827 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.827 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.828 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.828 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.828 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.828 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.829 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.829 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.829 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.829 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.829 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.829 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.830 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.830 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.830 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.830 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.830 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.831 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.831 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.831 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.833 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.833 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.834 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.835 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.835 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.835 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.836 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.836 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.834 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.837 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.837 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.837 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6e8ebbf0>] with cache [{}], pollster history [{'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.836 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.838 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.838 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.838 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.839 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.839 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.840 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.840 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.840 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.840 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.841 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.841 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.841 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.841 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.842 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.842 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.842 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.842 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.843 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.843 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.843 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.844 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.844 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.844 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.846 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.846 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.846 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.846 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.846 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.846 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.847 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.847 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.847 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.847 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.847 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.847 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.847 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.848 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.848 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 19:59:48.848 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 19:59:52 compute-0 nova_compute[189564]: 2025-12-01 19:59:52.746 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:53 compute-0 nova_compute[189564]: 2025-12-01 19:59:53.752 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:57 compute-0 nova_compute[189564]: 2025-12-01 19:59:57.750 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:58 compute-0 podman[253169]: 2025-12-01 19:59:58.346493321 +0000 UTC m=+0.123683409 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, release=1755695350, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 19:59:58 compute-0 nova_compute[189564]: 2025-12-01 19:59:58.754 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 19:59:59 compute-0 podman[203750]: time="2025-12-01T19:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 19:59:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 19:59:59 compute-0 podman[203750]: @ - - [01/Dec/2025:19:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec  1 20:00:01 compute-0 openstack_network_exporter[205914]: ERROR   20:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:00:01 compute-0 openstack_network_exporter[205914]: ERROR   20:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:00:01 compute-0 openstack_network_exporter[205914]: ERROR   20:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:00:01 compute-0 openstack_network_exporter[205914]: ERROR   20:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:00:01 compute-0 openstack_network_exporter[205914]: ERROR   20:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:00:02 compute-0 nova_compute[189564]: 2025-12-01 20:00:02.753 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:03 compute-0 podman[253189]: 2025-12-01 20:00:03.367886263 +0000 UTC m=+0.126423425 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:00:03 compute-0 nova_compute[189564]: 2025-12-01 20:00:03.756 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:06 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:00:06.220 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 20:00:06 compute-0 nova_compute[189564]: 2025-12-01 20:00:06.220 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:06 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:00:06.221 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 20:00:07 compute-0 nova_compute[189564]: 2025-12-01 20:00:07.757 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:08 compute-0 nova_compute[189564]: 2025-12-01 20:00:08.759 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:09 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:00:09.223 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 20:00:09 compute-0 nova_compute[189564]: 2025-12-01 20:00:09.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:09 compute-0 nova_compute[189564]: 2025-12-01 20:00:09.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 20:00:10 compute-0 podman[253213]: 2025-12-01 20:00:10.340038811 +0000 UTC m=+0.110243550 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 20:00:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:00:12.219 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:00:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:00:12.219 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:00:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:00:12.219 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:00:12 compute-0 nova_compute[189564]: 2025-12-01 20:00:12.761 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:13 compute-0 nova_compute[189564]: 2025-12-01 20:00:13.760 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:15 compute-0 podman[253233]: 2025-12-01 20:00:15.278104749 +0000 UTC m=+0.055844348 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.445 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.445 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.446 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.446 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.766 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.789 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.790 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5422MB free_disk=72.37423706054688GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.791 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.791 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.880 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.880 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.913 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.960 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.961 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.979 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 20:00:17 compute-0 nova_compute[189564]: 2025-12-01 20:00:17.997 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 20:00:18 compute-0 nova_compute[189564]: 2025-12-01 20:00:18.020 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:00:18 compute-0 nova_compute[189564]: 2025-12-01 20:00:18.034 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 20:00:18 compute-0 nova_compute[189564]: 2025-12-01 20:00:18.035 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 20:00:18 compute-0 nova_compute[189564]: 2025-12-01 20:00:18.036 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:00:18 compute-0 podman[253262]: 2025-12-01 20:00:18.330500635 +0000 UTC m=+0.090464136 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 20:00:18 compute-0 podman[253261]: 2025-12-01 20:00:18.360396435 +0000 UTC m=+0.116725803 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, io.openshift.tags=base rhel9, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, architecture=x86_64, config_id=edpm, distribution-scope=public)
Dec  1 20:00:18 compute-0 podman[253263]: 2025-12-01 20:00:18.36089483 +0000 UTC m=+0.116644720 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true)
Dec  1 20:00:18 compute-0 podman[253264]: 2025-12-01 20:00:18.372818861 +0000 UTC m=+0.104037718 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 20:00:18 compute-0 podman[253281]: 2025-12-01 20:00:18.439772335 +0000 UTC m=+0.156997756 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller)
Dec  1 20:00:18 compute-0 nova_compute[189564]: 2025-12-01 20:00:18.763 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:20 compute-0 nova_compute[189564]: 2025-12-01 20:00:20.034 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:20 compute-0 nova_compute[189564]: 2025-12-01 20:00:20.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:20 compute-0 nova_compute[189564]: 2025-12-01 20:00:20.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 20:00:20 compute-0 nova_compute[189564]: 2025-12-01 20:00:20.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 20:00:20 compute-0 nova_compute[189564]: 2025-12-01 20:00:20.269 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 20:00:20 compute-0 nova_compute[189564]: 2025-12-01 20:00:20.270 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:22 compute-0 nova_compute[189564]: 2025-12-01 20:00:22.265 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:22 compute-0 nova_compute[189564]: 2025-12-01 20:00:22.770 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:23 compute-0 nova_compute[189564]: 2025-12-01 20:00:23.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:23 compute-0 nova_compute[189564]: 2025-12-01 20:00:23.766 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:27 compute-0 nova_compute[189564]: 2025-12-01 20:00:27.772 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:28 compute-0 nova_compute[189564]: 2025-12-01 20:00:28.768 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:29 compute-0 podman[253367]: 2025-12-01 20:00:29.304377967 +0000 UTC m=+0.076676466 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm)
Dec  1 20:00:29 compute-0 podman[203750]: time="2025-12-01T20:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:00:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 20:00:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4337 "" "Go-http-client/1.1"
Dec  1 20:00:31 compute-0 openstack_network_exporter[205914]: ERROR   20:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:00:31 compute-0 openstack_network_exporter[205914]: ERROR   20:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:00:31 compute-0 openstack_network_exporter[205914]: ERROR   20:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:00:31 compute-0 openstack_network_exporter[205914]: ERROR   20:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:00:31 compute-0 openstack_network_exporter[205914]: ERROR   20:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:00:32 compute-0 nova_compute[189564]: 2025-12-01 20:00:32.775 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:33 compute-0 nova_compute[189564]: 2025-12-01 20:00:33.770 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:34 compute-0 podman[253387]: 2025-12-01 20:00:34.277175597 +0000 UTC m=+0.050540914 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 20:00:36 compute-0 ovn_controller[97948]: 2025-12-01T20:00:36Z|00065|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Dec  1 20:00:37 compute-0 nova_compute[189564]: 2025-12-01 20:00:37.779 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:38 compute-0 nova_compute[189564]: 2025-12-01 20:00:38.772 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.250 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.251 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.252 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.252 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.253 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.254 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.336 189568 DEBUG nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.346 189568 DEBUG nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.346 189568 WARNING nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Unknown base file: /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.346 189568 WARNING nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Unknown base file: /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.347 189568 INFO nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Removable base files: /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683 /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.347 189568 INFO nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/1324593a3f01becd5f72fdfdb0281e45c2a6b683
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.347 189568 INFO nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ac10605fd1db743aca604ff67d0f873a18376180
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.348 189568 DEBUG nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.348 189568 DEBUG nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.348 189568 DEBUG nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec  1 20:00:40 compute-0 nova_compute[189564]: 2025-12-01 20:00:40.349 189568 INFO nova.virt.libvirt.imagecache [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Dec  1 20:00:41 compute-0 podman[253412]: 2025-12-01 20:00:41.356433078 +0000 UTC m=+0.124335419 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Dec  1 20:00:42 compute-0 nova_compute[189564]: 2025-12-01 20:00:42.785 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:43 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:00:43.632 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 20:00:43 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:00:43.632 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 20:00:43 compute-0 nova_compute[189564]: 2025-12-01 20:00:43.635 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:43 compute-0 nova_compute[189564]: 2025-12-01 20:00:43.774 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:46 compute-0 podman[253431]: 2025-12-01 20:00:46.305051726 +0000 UTC m=+0.073052944 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 20:00:47 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:00:47.635 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 20:00:47 compute-0 nova_compute[189564]: 2025-12-01 20:00:47.790 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:47 compute-0 nova_compute[189564]: 2025-12-01 20:00:47.973 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:48 compute-0 nova_compute[189564]: 2025-12-01 20:00:48.777 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:49 compute-0 nova_compute[189564]: 2025-12-01 20:00:49.004 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:49 compute-0 podman[253457]: 2025-12-01 20:00:49.327341784 +0000 UTC m=+0.093382036 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 20:00:49 compute-0 podman[253458]: 2025-12-01 20:00:49.340322588 +0000 UTC m=+0.089269099 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  1 20:00:49 compute-0 podman[253459]: 2025-12-01 20:00:49.355161119 +0000 UTC m=+0.099898389 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:00:49 compute-0 podman[253463]: 2025-12-01 20:00:49.356349667 +0000 UTC m=+0.111284254 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 20:00:49 compute-0 podman[253456]: 2025-12-01 20:00:49.360723842 +0000 UTC m=+0.125297079 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, release=1214.1726694543, release-0.7.12=, config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  1 20:00:49 compute-0 nova_compute[189564]: 2025-12-01 20:00:49.965 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:49 compute-0 nova_compute[189564]: 2025-12-01 20:00:49.995 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:51 compute-0 nova_compute[189564]: 2025-12-01 20:00:51.360 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:52 compute-0 nova_compute[189564]: 2025-12-01 20:00:52.792 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:53 compute-0 nova_compute[189564]: 2025-12-01 20:00:53.780 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:55 compute-0 nova_compute[189564]: 2025-12-01 20:00:55.038 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:55 compute-0 nova_compute[189564]: 2025-12-01 20:00:55.786 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:57 compute-0 nova_compute[189564]: 2025-12-01 20:00:57.587 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:57 compute-0 nova_compute[189564]: 2025-12-01 20:00:57.794 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:57 compute-0 nova_compute[189564]: 2025-12-01 20:00:57.809 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:58 compute-0 nova_compute[189564]: 2025-12-01 20:00:58.469 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:58 compute-0 nova_compute[189564]: 2025-12-01 20:00:58.784 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:00:59 compute-0 podman[203750]: time="2025-12-01T20:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:00:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 20:00:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec  1 20:01:00 compute-0 podman[253558]: 2025-12-01 20:01:00.296966385 +0000 UTC m=+0.069269366 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  1 20:01:01 compute-0 openstack_network_exporter[205914]: ERROR   20:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:01:01 compute-0 openstack_network_exporter[205914]: ERROR   20:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:01:01 compute-0 openstack_network_exporter[205914]: ERROR   20:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:01:01 compute-0 openstack_network_exporter[205914]: ERROR   20:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:01:01 compute-0 openstack_network_exporter[205914]: ERROR   20:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:01:02 compute-0 nova_compute[189564]: 2025-12-01 20:01:02.797 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:01:03 compute-0 nova_compute[189564]: 2025-12-01 20:01:03.786 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:01:05 compute-0 podman[253594]: 2025-12-01 20:01:05.308948493 +0000 UTC m=+0.079580058 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.211 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.211 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.237 189568 DEBUG nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.387 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.388 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.402 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.403 189568 INFO nova.compute.claims [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.566 189568 DEBUG nova.compute.provider_tree [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.584 189568 DEBUG nova.scheduler.client.report [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.614 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.615 189568 DEBUG nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.720 189568 DEBUG nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.721 189568 DEBUG nova.network.neutron [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.789 189568 INFO nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.809 189568 DEBUG nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.911 189568 DEBUG nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.912 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.913 189568 INFO nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Creating image(s)#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.914 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "/var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.914 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "/var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.915 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "/var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.915 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:06 compute-0 nova_compute[189564]: 2025-12-01 20:01:06.916 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.214 189568 DEBUG nova.policy [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e346f67d906543ea8982cb53415ee19b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd9b058a656be4393a4619312186fc083', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.278 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "5e264735-c003-4c77-8b16-cb48211f837f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.279 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.303 189568 DEBUG nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.380 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.381 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.389 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.390 189568 INFO nova.compute.claims [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.564 189568 DEBUG nova.compute.provider_tree [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.595 189568 DEBUG nova.scheduler.client.report [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
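The inventory dict above determines schedulable capacity via the standard Placement formula, usable = (total - reserved) * allocation_ratio. A worked check of the exact numbers in this log line:

```python
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, usable)
# VCPU 32.0        -> 8 cores oversubscribed 4x, so up to 32 vCPUs claimable
# MEMORY_MB 7168.0 -> 7 GiB of the 7.5 GiB offered to instances
# DISK_GB 70.2     -> (79 - 1) GB scaled down by the 0.9 ratio
```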
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.638 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.639 189568 DEBUG nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.699 189568 DEBUG nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.700 189568 DEBUG nova.network.neutron [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.726 189568 INFO nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.762 189568 DEBUG nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.802 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.873 189568 DEBUG nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.875 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.875 189568 INFO nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Creating image(s)#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.876 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "/var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.876 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "/var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.877 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "/var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:07 compute-0 nova_compute[189564]: 2025-12-01 20:01:07.878 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:08 compute-0 nova_compute[189564]: 2025-12-01 20:01:08.333 189568 DEBUG nova.policy [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1b42f5bff3ce40c99c067bb358d36444', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '02b2a851f173482691b98aa9a993fbf9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 20:01:08 compute-0 nova_compute[189564]: 2025-12-01 20:01:08.795 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.078 189568 DEBUG nova.network.neutron [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Successfully created port: 6f128282-4268-4162-a349-1906ef0a8e4d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
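Port creation is delegated to Neutron; the line above is logged once the API call succeeds. A hypothetical openstacksdk equivalent of that call, for orientation only: the cloud name and network UUID are placeholders, and only the instance UUID comes from this log:

```python
import openstack

conn = openstack.connect(cloud="overcloud")  # assumed clouds.yaml entry
port = conn.network.create_port(
    network_id="NETWORK_UUID",                         # placeholder
    device_id="98c0547a-3efc-4214-85f9-ccceaf32a2a6",  # instance UUID
    device_owner="compute:nova",
)
print(port.id)  # compare: "Successfully created port: 6f128282-..."
```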
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.430 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.509 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd.part --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.510 189568 DEBUG nova.virt.images [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] d169c234-7ac2-4fdc-b9fa-a08c93484d75 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.512 189568 DEBUG nova.privsep.utils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.513 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd.part /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.801 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd.part /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd.converted" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.809 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.870 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.871 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
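The sequence from 20:01:09.430 to here is the image-cache fetch under that lock: inspect the downloaded .part file, convert qcow2 to raw because the cache stores raw bases, then re-inspect the result before it becomes the final base file. Note that qemu-img info runs under oslo's prlimit wrapper (1 GiB address space, 30 s CPU) so a corrupt or hostile image cannot exhaust the compute host while being probed. A sketch of the same pipeline with the paths from the log:

```python
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd"

def qemu_img_info(path):
    # Same resource-capped invocation as in the log lines above.
    out = subprocess.check_output([
        "python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
    ])
    return json.loads(out)

if qemu_img_info(BASE + ".part")["format"] == "qcow2":
    # "... was qcow2, converting to raw"; -t none uses O_DIRECT so the
    # converted base goes to stable storage, not just the page cache.
    subprocess.check_call(["qemu-img", "convert", "-t", "none", "-O", "raw",
                           "-f", "qcow2", BASE + ".part", BASE + ".converted"])
```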
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.886 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 2.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.886 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.899 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.913 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.952 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.953 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.953 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.964 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.978 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:09 compute-0 nova_compute[189564]: 2025-12-01 20:01:09.979 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.020 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.020 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.066 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.068 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
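With the raw base cached, each instance gets a thin copy-on-write qcow2 overlay backed by it; the trailing 1073741824 argument (1 GiB, the flavor's root disk) sets the overlay's virtual size. The command below is the one from the log, wrapped in Python purely for illustration:

```python
import subprocess

base = "/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd"
disk = "/var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk"

# backing_fmt=raw records the base format explicitly so qemu never probes it;
# writes land in the per-instance overlay while the base stays shared and
# effectively read-only across all instances booted from this image.
subprocess.check_call([
    "qemu-img", "create", "-f", "qcow2",
    "-o", f"backing_file={base},backing_fmt=raw",
    disk, "1073741824",
])
```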
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.068 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.086 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.099 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.142 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.143 189568 DEBUG nova.virt.disk.api [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Checking if we can resize image /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.144 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.182 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.183 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.207 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.208 189568 DEBUG nova.virt.disk.api [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Cannot resize image /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
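"Cannot resize image ... to a smaller size" is the expected outcome here, not an error: the overlay was just created at the flavor size, and disk images are only ever grown in place, never shrunk. A minimal sketch of that guard, assuming the qemu-img JSON output seen above:

```python
import json
import subprocess

def can_resize_image(path, new_size):
    out = subprocess.check_output(
        ["qemu-img", "info", path, "--force-share", "--output=json"])
    virtual_size = json.loads(out)["virtual-size"]
    return virtual_size < new_size  # grow only; equal or larger is refused

# Here virtual-size is already 1073741824 bytes and the flavor asks for the
# same, so the check returns False and the resize step is skipped.
```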
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.208 189568 DEBUG nova.objects.instance [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lazy-loading 'migration_context' on Instance uuid 98c0547a-3efc-4214-85f9-ccceaf32a2a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.238 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.238 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Ensure instance console log exists: /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.239 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.239 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.239 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.244 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk 1073741824" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.245 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.245 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.339 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.340 189568 DEBUG nova.virt.disk.api [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Checking if we can resize image /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.340 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.415 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.416 189568 DEBUG nova.virt.disk.api [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Cannot resize image /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.416 189568 DEBUG nova.objects.instance [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e264735-c003-4c77-8b16-cb48211f837f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.431 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.432 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Ensure instance console log exists: /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.433 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.433 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:10 compute-0 nova_compute[189564]: 2025-12-01 20:01:10.433 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.348 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.349 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
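These two lines are an oslo.service periodic task firing and immediately returning, because reclaim_instance_interval is unset and deferred (soft) deletes are therefore disabled. A hedged sketch of the pattern; DummyManager and the option registration are illustrative stand-ins for Nova's ComputeManager, not its real code:

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

class DummyManager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task
    def _reclaim_queued_deletes(self, context):
        if CONF.reclaim_instance_interval <= 0:
            # Matches "CONF.reclaim_instance_interval <= 0, skipping..."
            return
        # ...otherwise purge SOFT_DELETED instances older than the interval.

manager = DummyManager(CONF)  # the task runner calls run_periodic_tasks()
```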
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.478 189568 DEBUG nova.network.neutron [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Successfully created port: 241aee4b-acee-43c4-b165-e8322c56a1d3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.541 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.542 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.571 189568 DEBUG nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.648 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.648 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.659 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.660 189568 INFO nova.compute.claims [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.812 189568 DEBUG nova.compute.provider_tree [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.830 189568 DEBUG nova.scheduler.client.report [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.854 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.855 189568 DEBUG nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.908 189568 DEBUG nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.909 189568 DEBUG nova.network.neutron [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 20:01:11 compute-0 nova_compute[189564]: 2025-12-01 20:01:11.946 189568 INFO nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.001 189568 DEBUG nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.112 189568 DEBUG nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.114 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.114 189568 INFO nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Creating image(s)#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.114 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.115 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.116 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.131 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.199 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.200 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.201 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.211 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:12.220 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:12.220 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:12.221 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
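The three ovn_metadata_agent lines are its ProcessMonitor tick: under a named lock it verifies that each helper process it spawned (for example the metadata proxy instances) is still alive. A generic sketch of such a liveness sweep, deliberately not Neutron's actual classes:

```python
import os

def check_child_processes(children):
    """children: {name: pid}; illustrative bookkeeping, not Neutron's."""
    for name, pid in children.items():
        try:
            os.kill(pid, 0)  # signal 0 probes existence without signaling
        except ProcessLookupError:
            print(f"{name} (pid {pid}) exited unexpectedly; respawn it")
```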
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.269 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.270 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.287 189568 DEBUG nova.network.neutron [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Successfully updated port: 6f128282-4268-4162-a349-1906ef0a8e4d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.307 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "refresh_cache-98c0547a-3efc-4214-85f9-ccceaf32a2a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.308 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquired lock "refresh_cache-98c0547a-3efc-4214-85f9-ccceaf32a2a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.308 189568 DEBUG nova.network.neutron [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.312 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.312 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.312 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:12 compute-0 podman[253663]: 2025-12-01 20:01:12.348447837 +0000 UTC m=+0.114106151 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
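The podman record above is a scheduled container healthcheck: the configured test (/openstack/healthcheck inside the multipathd container) exited 0, so health_status stays healthy with a zero failing streak. The same state can be exercised and read back by hand; the container name comes from the log, and the commands are standard podman CLI:

```python
import subprocess

# Run the container's configured healthcheck once, then read the recorded
# status out of the container state.
subprocess.run(["podman", "healthcheck", "run", "multipathd"], check=False)
status = subprocess.check_output(
    ["podman", "inspect", "--format", "{{.State.Health.Status}}",
     "multipathd"], text=True).strip()
print(status)  # "healthy", matching health_status=healthy above
```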
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.403 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.404 189568 DEBUG nova.virt.disk.api [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Checking if we can resize image /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.404 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.454 189568 DEBUG nova.policy [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '89c8a8cb31224140bf2b9c0b94acfe04', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5102d72cb1ce4e6da810b2584a2abd73', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
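The failed network:attach_external_network check above is oslo.policy evaluating the request credentials (roles member and reader, is_admin False) against the rule registered for that action; the failure is expected for a non-admin tenant and merely disallows attaching to external networks. A hedged sketch of the same evaluation with oslo.policy; the check string role:admin is illustrative only and is not Nova's actual default for this rule:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Illustrative default only; Nova registers its own defaults for this rule.
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "role:admin"))

    creds = {"roles": ["member", "reader"],
             "user_id": "89c8a8cb31224140bf2b9c0b94acfe04",
             "project_id": "5102d72cb1ce4e6da810b2584a2abd73"}
    # member/reader do not satisfy role:admin, mirroring the failed check.
    print(enforcer.enforce("network:attach_external_network", {}, creds))
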
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.496 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.497 189568 DEBUG nova.virt.disk.api [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Cannot resize image /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
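Both qemu-img info probes run under oslo.concurrency's prlimit wrapper, which caps the child's address space (--as) and CPU seconds (--cpu) so a malformed image cannot wedge the compute service. A sketch that reuses the wrapper invocation from the log and then applies the "never shrink" rule behind the message above; it assumes oslo.concurrency is installed (as on this host), and the comparison is a simplification of nova.virt.disk.api.can_resize_image, not a copy of it:

    import json
    import subprocess

    PRLIMIT = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
               "--as=1073741824", "--cpu=30", "--"]

    def image_virtual_size(path):
        # Same guarded probe as the logged CMD; --force-share tolerates the
        # image being open elsewhere, --output=json gives a parseable reply.
        out = subprocess.run(
            PRLIMIT + ["env", "LC_ALL=C", "LANG=C",
                       "qemu-img", "info", path,
                       "--force-share", "--output=json"],
            check=True, capture_output=True, text=True)
        return json.loads(out.stdout)["virtual-size"]

    def can_resize(path, requested_bytes):
        # Growing is allowed; shrinking in place is refused, which is the
        # "Cannot resize image ... to a smaller size" branch above.
        return requested_bytes >= image_virtual_size(path)
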
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.498 189568 DEBUG nova.objects.instance [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.522 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.522 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Ensure instance console log exists: /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.523 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.524 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.525 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
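The acquire/release pair around _allocate_mdevs is oslo.concurrency's named-lock pattern: every code path that touches vGPU inventory serializes on the shared "vgpu_resources" name. A minimal sketch of the pattern; the function body is a placeholder, not Nova's mdev allocator:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs(request):
        # Runs under the process-local "vgpu_resources" semaphore, so two
        # concurrent spawns in this nova-compute cannot hand out the same
        # mediated device.
        return []
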
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.633 189568 DEBUG nova.network.neutron [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 20:01:12 compute-0 nova_compute[189564]: 2025-12-01 20:01:12.805 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:13 compute-0 nova_compute[189564]: 2025-12-01 20:01:13.238 189568 DEBUG nova.compute.manager [req-239f4da3-70c2-4b24-af48-3a8009836f9d req-7d72dca6-051b-4476-baa1-01b06a4a5f47 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Received event network-changed-6f128282-4268-4162-a349-1906ef0a8e4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:13 compute-0 nova_compute[189564]: 2025-12-01 20:01:13.239 189568 DEBUG nova.compute.manager [req-239f4da3-70c2-4b24-af48-3a8009836f9d req-7d72dca6-051b-4476-baa1-01b06a4a5f47 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Refreshing instance network info cache due to event network-changed-6f128282-4268-4162-a349-1906ef0a8e4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:01:13 compute-0 nova_compute[189564]: 2025-12-01 20:01:13.240 189568 DEBUG oslo_concurrency.lockutils [req-239f4da3-70c2-4b24-af48-3a8009836f9d req-7d72dca6-051b-4476-baa1-01b06a4a5f47 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-98c0547a-3efc-4214-85f9-ccceaf32a2a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:01:13 compute-0 nova_compute[189564]: 2025-12-01 20:01:13.738 189568 DEBUG nova.network.neutron [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Successfully created port: 09097114-7a48-4b64-ab17-ed474efbf80e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 20:01:13 compute-0 nova_compute[189564]: 2025-12-01 20:01:13.791 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.015 189568 DEBUG nova.network.neutron [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Successfully updated port: 241aee4b-acee-43c4-b165-e8322c56a1d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.031 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "refresh_cache-5e264735-c003-4c77-8b16-cb48211f837f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.032 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquired lock "refresh_cache-5e264735-c003-4c77-8b16-cb48211f837f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.032 189568 DEBUG nova.network.neutron [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.287 189568 DEBUG nova.network.neutron [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.740 189568 DEBUG nova.network.neutron [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Updating instance_info_cache with network_info: [{"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.769 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Releasing lock "refresh_cache-98c0547a-3efc-4214-85f9-ccceaf32a2a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.770 189568 DEBUG nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Instance network_info: |[{"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
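The network_info blob logged twice above is what the instance info cache stores per VIF. A short sketch of walking that structure for the fields the rest of the spawn uses (device name, MAC, fixed IPs, MTU); the dict literal is abbreviated from the log, with field names unchanged:

    network_info = [{
        "id": "6f128282-4268-4162-a349-1906ef0a8e4d",
        "address": "fa:16:3e:6f:a3:82",
        "devname": "tap6f128282-42",
        "network": {
            "subnets": [{
                "cidr": "10.100.0.0/28",
                "ips": [{"address": "10.100.0.12", "type": "fixed"}],
            }],
            "meta": {"mtu": 1442, "tunneled": True},
        },
    }]

    for vif in network_info:
        fixed = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
        # -> tap6f128282-42 fa:16:3e:6f:a3:82 ['10.100.0.12'] mtu 1442
        print(vif["devname"], vif["address"], fixed,
              "mtu", vif["network"]["meta"]["mtu"])
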
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.771 189568 DEBUG oslo_concurrency.lockutils [req-239f4da3-70c2-4b24-af48-3a8009836f9d req-7d72dca6-051b-4476-baa1-01b06a4a5f47 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-98c0547a-3efc-4214-85f9-ccceaf32a2a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.771 189568 DEBUG nova.network.neutron [req-239f4da3-70c2-4b24-af48-3a8009836f9d req-7d72dca6-051b-4476-baa1-01b06a4a5f47 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Refreshing network info cache for port 6f128282-4268-4162-a349-1906ef0a8e4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.775 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Start _get_guest_xml network_info=[{"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.784 189568 WARNING nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.790 189568 DEBUG nova.virt.libvirt.host [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.791 189568 DEBUG nova.virt.libvirt.host [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.796 189568 DEBUG nova.virt.libvirt.host [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.797 189568 DEBUG nova.virt.libvirt.host [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.797 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.798 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.799 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.800 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.800 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.801 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.802 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.802 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.803 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.804 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.804 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.805 189568 DEBUG nova.virt.hardware [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.814 189568 DEBUG nova.virt.libvirt.vif [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1274346215',display_name='tempest-ServerAddressesTestJSON-server-1274346215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1274346215',id=5,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9b058a656be4393a4619312186fc083',ramdisk_id='',reservation_id='r-p96s3cxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-296714616',owner_user_name='tempest-ServerAddressesTestJSON-296714616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:01:06Z,user_data=None,user_id='e346f67d906543ea8982cb53415ee19b',uuid=98c0547a-3efc-4214-85f9-ccceaf32a2a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.814 189568 DEBUG nova.network.os_vif_util [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Converting VIF {"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.816 189568 DEBUG nova.network.os_vif_util [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:a3:82,bridge_name='br-int',has_traffic_filtering=True,id=6f128282-4268-4162-a349-1906ef0a8e4d,network=Network(584f129c-30be-45c6-a239-e6753cbee124),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f128282-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.818 189568 DEBUG nova.objects.instance [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98c0547a-3efc-4214-85f9-ccceaf32a2a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.833 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <uuid>98c0547a-3efc-4214-85f9-ccceaf32a2a6</uuid>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <name>instance-00000005</name>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <nova:name>tempest-ServerAddressesTestJSON-server-1274346215</nova:name>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:01:14</nova:creationTime>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:01:14 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:01:14 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:01:14 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:01:14 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:01:14 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:01:14 compute-0 nova_compute[189564]:        <nova:user uuid="e346f67d906543ea8982cb53415ee19b">tempest-ServerAddressesTestJSON-296714616-project-member</nova:user>
Dec  1 20:01:14 compute-0 nova_compute[189564]:        <nova:project uuid="d9b058a656be4393a4619312186fc083">tempest-ServerAddressesTestJSON-296714616</nova:project>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="d169c234-7ac2-4fdc-b9fa-a08c93484d75"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:01:14 compute-0 nova_compute[189564]:        <nova:port uuid="6f128282-4268-4162-a349-1906ef0a8e4d">
Dec  1 20:01:14 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <system>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <entry name="serial">98c0547a-3efc-4214-85f9-ccceaf32a2a6</entry>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <entry name="uuid">98c0547a-3efc-4214-85f9-ccceaf32a2a6</entry>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    </system>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <os>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  </os>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <features>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  </features>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk.config"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:6f:a3:82"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <target dev="tap6f128282-42"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/console.log" append="off"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <video>
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    </video>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:01:14 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:01:14 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:01:14 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:01:14 compute-0 nova_compute[189564]: </domain>
Dec  1 20:01:14 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
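The domain XML dumped above is exactly what Nova hands to libvirt for this guest. For inspecting such a dump offline, a sketch using only the standard library; it assumes the <domain> document has been saved to domain.xml:

    import xml.etree.ElementTree as ET

    # Assumes the <domain> document above was saved to domain.xml.
    dom = ET.parse("domain.xml").getroot()

    print("memory KiB:", dom.findtext("memory"))  # 131072 -> 128 MiB
    for disk in dom.findall("./devices/disk"):
        src = disk.find("source")
        print(disk.get("device"), src.get("file") if src is not None else None)
    iface = dom.find("./devices/interface")
    print("mac:", iface.find("mac").get("address"),
          "tap:", iface.find("target").get("dev"))
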
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.835 189568 DEBUG nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Preparing to wait for external event network-vif-plugged-6f128282-4268-4162-a349-1906ef0a8e4d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.835 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.836 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.836 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.837 189568 DEBUG nova.virt.libvirt.vif [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1274346215',display_name='tempest-ServerAddressesTestJSON-server-1274346215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1274346215',id=5,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d9b058a656be4393a4619312186fc083',ramdisk_id='',reservation_id='r-p96s3cxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-296714616',owner_user_name='tempest-ServerAddressesTestJSON-296714616-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:01:06Z,user_data=None,user_id='e346f67d906543ea8982cb53415ee19b',uuid=98c0547a-3efc-4214-85f9-ccceaf32a2a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.838 189568 DEBUG nova.network.os_vif_util [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Converting VIF {"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.839 189568 DEBUG nova.network.os_vif_util [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:a3:82,bridge_name='br-int',has_traffic_filtering=True,id=6f128282-4268-4162-a349-1906ef0a8e4d,network=Network(584f129c-30be-45c6-a239-e6753cbee124),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f128282-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.839 189568 DEBUG os_vif [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:a3:82,bridge_name='br-int',has_traffic_filtering=True,id=6f128282-4268-4162-a349-1906ef0a8e4d,network=Network(584f129c-30be-45c6-a239-e6753cbee124),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f128282-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.840 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.841 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.841 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.849 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.850 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f128282-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.850 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f128282-42, col_values=(('external_ids', {'iface-id': '6f128282-4268-4162-a349-1906ef0a8e4d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:a3:82', 'vm-uuid': '98c0547a-3efc-4214-85f9-ccceaf32a2a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
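The two transaction commands above are os-vif plugging the port: a single OVSDB transaction that adds tap6f128282-42 to br-int and stamps the Interface row with the external_ids OVN matches on (iface-id is the Neutron port UUID). A sketch of the same pair issued directly through ovsdbapp; the socket path and timeout are assumptions for a local vswitchd, and the external_ids values are copied from the logged transaction:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Socket path and timeout are assumptions, not taken from the log.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction, two commands, matching AddPortCommand + DbSetCommand.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap6f128282-42", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap6f128282-42",
            ("external_ids", {
                "iface-id": "6f128282-4268-4162-a349-1906ef0a8e4d",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:6f:a3:82",
                "vm-uuid": "98c0547a-3efc-4214-85f9-ccceaf32a2a6"})))
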
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.853 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.854 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:01:14 compute-0 NetworkManager[56474]: <info>  [1764619274.8550] manager: (tap6f128282-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.866 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.868 189568 INFO os_vif [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:a3:82,bridge_name='br-int',has_traffic_filtering=True,id=6f128282-4268-4162-a349-1906ef0a8e4d,network=Network(584f129c-30be-45c6-a239-e6753cbee124),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f128282-42')#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.943 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.944 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.945 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] No VIF found with MAC fa:16:3e:6f:a3:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.946 189568 INFO nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Using config drive#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.956 189568 DEBUG nova.network.neutron [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Successfully updated port: 09097114-7a48-4b64-ab17-ed474efbf80e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.986 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.987 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquired lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:01:14 compute-0 nova_compute[189564]: 2025-12-01 20:01:14.987 189568 DEBUG nova.network.neutron [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.051 189568 DEBUG nova.network.neutron [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.113 189568 INFO nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Creating config drive at /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk.config#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.122 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6n3ec9fm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.267 189568 DEBUG oslo_concurrency.processutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6n3ec9fm" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
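The mkisofs run above builds the config drive: a small ISO9660 volume labelled config-2, the label cloud-init probes for inside the guest. A sketch reducing those two log lines to a plain subprocess call; the metadata directory argument is an illustrative stand-in for Nova's temporary tree (/tmp/tmp6n3ec9fm in the log), and the publisher string is abbreviated:

    import subprocess

    def build_config_drive(iso_path, metadata_dir):
        # Flags mirror the logged invocation; "config-2" is the volume label
        # cloud-init searches for when detecting a config drive.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute", "-quiet", "-J", "-r",
             "-V", "config-2", metadata_dir],
            check=True)
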
Dec  1 20:01:16 compute-0 kernel: tap6f128282-42: entered promiscuous mode
Dec  1 20:01:16 compute-0 NetworkManager[56474]: <info>  [1764619276.3691] manager: (tap6f128282-42): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.373 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 ovn_controller[97948]: 2025-12-01T20:01:16Z|00066|binding|INFO|Claiming lport 6f128282-4268-4162-a349-1906ef0a8e4d for this chassis.
Dec  1 20:01:16 compute-0 ovn_controller[97948]: 2025-12-01T20:01:16Z|00067|binding|INFO|6f128282-4268-4162-a349-1906ef0a8e4d: Claiming fa:16:3e:6f:a3:82 10.100.0.12
Dec  1 20:01:16 compute-0 ovn_controller[97948]: 2025-12-01T20:01:16Z|00068|binding|INFO|Setting lport 6f128282-4268-4162-a349-1906ef0a8e4d ovn-installed in OVS
Dec  1 20:01:16 compute-0 ovn_controller[97948]: 2025-12-01T20:01:16Z|00069|binding|INFO|Setting lport 6f128282-4268-4162-a349-1906ef0a8e4d up in Southbound
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.387 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.383 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:a3:82 10.100.0.12'], port_security=['fa:16:3e:6f:a3:82 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98c0547a-3efc-4214-85f9-ccceaf32a2a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-584f129c-30be-45c6-a239-e6753cbee124', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9b058a656be4393a4619312186fc083', 'neutron:revision_number': '2', 'neutron:security_group_ids': '270a4d79-bd17-4ca0-b3a5-599aea8e31b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c227f0b-b424-4195-b582-5bbd834fa708, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=6f128282-4268-4162-a349-1906ef0a8e4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.396 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.386 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 6f128282-4268-4162-a349-1906ef0a8e4d in datapath 584f129c-30be-45c6-a239-e6753cbee124 bound to our chassis#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.390 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 584f129c-30be-45c6-a239-e6753cbee124#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.400 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.403 189568 DEBUG nova.network.neutron [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Updating instance_info_cache with network_info: [{"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.408 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee67b24-0528-477d-ad30-5bda376de5e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.409 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap584f129c-31 in ovnmeta-584f129c-30be-45c6-a239-e6753cbee124 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.413 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap584f129c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.413 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[dff0c3b4-6dd1-4cb4-90c8-219f3a3b264f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.414 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[66d720b2-1bf0-48c1-b929-1c61d4af31ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 systemd-udevd[253726]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:01:16 compute-0 systemd-machined[155891]: New machine qemu-5-instance-00000005.
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.426 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Releasing lock "refresh_cache-5e264735-c003-4c77-8b16-cb48211f837f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.427 189568 DEBUG nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Instance network_info: |[{"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.430 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Start _get_guest_xml network_info=[{"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.431 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[dfaef768-dff3-4c69-83f5-f60537cb5558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec  1 20:01:16 compute-0 NetworkManager[56474]: <info>  [1764619276.4458] device (tap6f128282-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.444 189568 WARNING nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:01:16 compute-0 NetworkManager[56474]: <info>  [1764619276.4475] device (tap6f128282-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.450 189568 DEBUG nova.virt.libvirt.host [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.451 189568 DEBUG nova.virt.libvirt.host [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.456 189568 DEBUG nova.virt.libvirt.host [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.456 189568 DEBUG nova.virt.libvirt.host [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.457 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.457 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.457 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.458 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.458 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.458 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.458 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.458 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.459 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.459 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.459 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.459 189568 DEBUG nova.virt.hardware [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.461 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[1c87552a-e60f-4228-acc3-42e684094608]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.467 189568 DEBUG nova.virt.libvirt.vif [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-201991304',display_name='tempest-ServersTestManualDisk-server-201991304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-201991304',id=6,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDXKIJaNf7CqoWh7JOYr3T2ezeyWmUGqNR82Xznhp/JccD7+YhSMqoe/FRMjQKDTS9wNNY9dntu4a+xhzKktw1bK7nZ+gYLBifcMHKOv321YPJkytZo0eQBr0ZL7ZZ/Cw==',key_name='tempest-keypair-1442487873',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02b2a851f173482691b98aa9a993fbf9',ramdisk_id='',reservation_id='r-ikjv1kvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1579803427',owner_user_name='tempest-ServersTestManualDisk-1579803427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:01:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1b42f5bff3ce40c99c067bb358d36444',uuid=5e264735-c003-4c77-8b16-cb48211f837f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.467 189568 DEBUG nova.network.os_vif_util [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Converting VIF {"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.468 189568 DEBUG nova.network.os_vif_util [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:01:de,bridge_name='br-int',has_traffic_filtering=True,id=241aee4b-acee-43c4-b165-e8322c56a1d3,network=Network(50f1d760-d79c-40bd-a9b3-cf73e6f75cf0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241aee4b-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.469 189568 DEBUG nova.objects.instance [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e264735-c003-4c77-8b16-cb48211f837f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:01:16 compute-0 podman[253707]: 2025-12-01 20:01:16.473841388 +0000 UTC m=+0.123721290 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.495 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <uuid>5e264735-c003-4c77-8b16-cb48211f837f</uuid>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <name>instance-00000006</name>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <nova:name>tempest-ServersTestManualDisk-server-201991304</nova:name>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:01:16</nova:creationTime>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:01:16 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:01:16 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:01:16 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:01:16 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:01:16 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:01:16 compute-0 nova_compute[189564]:        <nova:user uuid="1b42f5bff3ce40c99c067bb358d36444">tempest-ServersTestManualDisk-1579803427-project-member</nova:user>
Dec  1 20:01:16 compute-0 nova_compute[189564]:        <nova:project uuid="02b2a851f173482691b98aa9a993fbf9">tempest-ServersTestManualDisk-1579803427</nova:project>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="d169c234-7ac2-4fdc-b9fa-a08c93484d75"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:01:16 compute-0 nova_compute[189564]:        <nova:port uuid="241aee4b-acee-43c4-b165-e8322c56a1d3">
Dec  1 20:01:16 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <system>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <entry name="serial">5e264735-c003-4c77-8b16-cb48211f837f</entry>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <entry name="uuid">5e264735-c003-4c77-8b16-cb48211f837f</entry>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    </system>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <os>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  </os>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <features>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  </features>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk.config"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:94:01:de"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <target dev="tap241aee4b-ac"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/console.log" append="off"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <video>
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    </video>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:01:16 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:01:16 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:01:16 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:01:16 compute-0 nova_compute[189564]: </domain>
Dec  1 20:01:16 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.496 189568 DEBUG nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Preparing to wait for external event network-vif-plugged-241aee4b-acee-43c4-b165-e8322c56a1d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.496 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "5e264735-c003-4c77-8b16-cb48211f837f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.496 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.496 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.497 189568 DEBUG nova.virt.libvirt.vif [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-201991304',display_name='tempest-ServersTestManualDisk-server-201991304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-201991304',id=6,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDXKIJaNf7CqoWh7JOYr3T2ezeyWmUGqNR82Xznhp/JccD7+YhSMqoe/FRMjQKDTS9wNNY9dntu4a+xhzKktw1bK7nZ+gYLBifcMHKOv321YPJkytZo0eQBr0ZL7ZZ/Cw==',key_name='tempest-keypair-1442487873',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02b2a851f173482691b98aa9a993fbf9',ramdisk_id='',reservation_id='r-ikjv1kvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1579803427',owner_user_name='tempest-ServersTestManualDisk-1579803427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:01:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1b42f5bff3ce40c99c067bb358d36444',uuid=5e264735-c003-4c77-8b16-cb48211f837f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.497 189568 DEBUG nova.network.os_vif_util [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Converting VIF {"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.498 189568 DEBUG nova.network.os_vif_util [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:01:de,bridge_name='br-int',has_traffic_filtering=True,id=241aee4b-acee-43c4-b165-e8322c56a1d3,network=Network(50f1d760-d79c-40bd-a9b3-cf73e6f75cf0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241aee4b-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.498 189568 DEBUG os_vif [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:01:de,bridge_name='br-int',has_traffic_filtering=True,id=241aee4b-acee-43c4-b165-e8322c56a1d3,network=Network(50f1d760-d79c-40bd-a9b3-cf73e6f75cf0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241aee4b-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.498 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.499 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.499 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.498 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[f41359cf-aef4-47b1-95dc-50d655e3eb2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.502 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.502 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap241aee4b-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.502 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap241aee4b-ac, col_values=(('external_ids', {'iface-id': '241aee4b-acee-43c4-b165-e8322c56a1d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:01:de', 'vm-uuid': '5e264735-c003-4c77-8b16-cb48211f837f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.505 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:01:16 compute-0 NetworkManager[56474]: <info>  [1764619276.5057] manager: (tap241aee4b-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.507 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3449b5-cd1a-4e88-aabb-d61b52408484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 NetworkManager[56474]: <info>  [1764619276.5093] manager: (tap584f129c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.515 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.517 189568 INFO os_vif [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:01:de,bridge_name='br-int',has_traffic_filtering=True,id=241aee4b-acee-43c4-b165-e8322c56a1d3,network=Network(50f1d760-d79c-40bd-a9b3-cf73e6f75cf0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241aee4b-ac')#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.548 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb19596-f7bf-4d72-b4de-151372fd530b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.552 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[c6aa6bc4-495c-4d74-b003-ec9cd854ad64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.576 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.577 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.577 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] No VIF found with MAC fa:16:3e:94:01:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.577 189568 INFO nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Using config drive#033[00m
Dec  1 20:01:16 compute-0 NetworkManager[56474]: <info>  [1764619276.5868] device (tap584f129c-30): carrier: link connected
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.594 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[be866fbd-9c17-479d-ac8d-d0731225fe9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.611 189568 DEBUG nova.compute.manager [req-e068856a-a5fb-46b4-8a84-39e4de18a119 req-5671b8e4-0e06-4880-96b0-0769abbfed50 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-changed-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.612 189568 DEBUG nova.compute.manager [req-e068856a-a5fb-46b4-8a84-39e4de18a119 req-5671b8e4-0e06-4880-96b0-0769abbfed50 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Refreshing instance network info cache due to event network-changed-09097114-7a48-4b64-ab17-ed474efbf80e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.612 189568 DEBUG oslo_concurrency.lockutils [req-e068856a-a5fb-46b4-8a84-39e4de18a119 req-5671b8e4-0e06-4880-96b0-0769abbfed50 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.615 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[2805f89f-2ba1-4fdb-a367-1fce7b2ec725]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap584f129c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:66:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576430, 'reachable_time': 20801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253774, 'error': None, 'target': 'ovnmeta-584f129c-30be-45c6-a239-e6753cbee124', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.641 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[e93f5554-c1c9-4ba4-84e7-b21575ae9fdb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:66fb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576430, 'tstamp': 576430}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253775, 'error': None, 'target': 'ovnmeta-584f129c-30be-45c6-a239-e6753cbee124', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.662 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[db089ad8-d381-49cc-ab77-9c2c652b43ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap584f129c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:66:fb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576430, 'reachable_time': 20801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253778, 'error': None, 'target': 'ovnmeta-584f129c-30be-45c6-a239-e6753cbee124', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
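The privsep replies above are pyroute2 netlink messages (RTM_NEWLINK/RTM_NEWADDR) marshalled back from the agent's privileged daemon; the attrs lists carry standard IFLA_* attributes for the namespace end of the metadata veth (tap584f129c-31). A minimal sketch of reading the same attributes directly with pyroute2, assuming root access on a host where that ovnmeta namespace still exists:

    # Illustrative only; mirrors the RTM_NEWLINK data returned above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-584f129c-30be-45c6-a239-e6753cbee124') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))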
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.693 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[73d8cc0e-916b-480b-a8d5-01ca38fbf736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.751 189568 DEBUG nova.compute.manager [req-6a171cd7-98ee-482d-8051-08cc653cc61b req-0a18b6d9-5570-465e-b34e-40b35a98fd5f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received event network-changed-241aee4b-acee-43c4-b165-e8322c56a1d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.752 189568 DEBUG nova.compute.manager [req-6a171cd7-98ee-482d-8051-08cc653cc61b req-0a18b6d9-5570-465e-b34e-40b35a98fd5f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Refreshing instance network info cache due to event network-changed-241aee4b-acee-43c4-b165-e8322c56a1d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.753 189568 DEBUG oslo_concurrency.lockutils [req-6a171cd7-98ee-482d-8051-08cc653cc61b req-0a18b6d9-5570-465e-b34e-40b35a98fd5f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-5e264735-c003-4c77-8b16-cb48211f837f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.753 189568 DEBUG oslo_concurrency.lockutils [req-6a171cd7-98ee-482d-8051-08cc653cc61b req-0a18b6d9-5570-465e-b34e-40b35a98fd5f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-5e264735-c003-4c77-8b16-cb48211f837f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.754 189568 DEBUG nova.network.neutron [req-6a171cd7-98ee-482d-8051-08cc653cc61b req-0a18b6d9-5570-465e-b34e-40b35a98fd5f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Refreshing network info cache for port 241aee4b-acee-43c4-b165-e8322c56a1d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.763 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619276.7624223, 98c0547a-3efc-4214-85f9-ccceaf32a2a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.764 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] VM Started (Lifecycle Event)#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.765 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[4eae0361-5798-402f-ac21-acdbc9aad9db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.766 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap584f129c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.766 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.767 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap584f129c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:16 compute-0 kernel: tap584f129c-30: entered promiscuous mode
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.771 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 NetworkManager[56474]: <info>  [1764619276.7733] manager: (tap584f129c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.779 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.779 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap584f129c-30, col_values=(('external_ids', {'iface-id': '58b50f5f-6287-4aaf-8771-fdbeb3300763'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
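The three ovsdbapp transactions above implement the plugging sequence for the metadata veth: remove any stale port from br-ex, add tap584f129c-30 to br-int, then set external_ids:iface-id, which is the key ovn-controller watches to bind the interface to its logical port. A sketch of the equivalent calls through ovsdbapp's public Open vSwitch API (the socket path and timeout are assumptions; port, bridge, and iface-id values are copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same three commands as the logged DelPortCommand / AddPortCommand /
    # DbSetCommand, batched into a single transaction.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap584f129c-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap584f129c-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap584f129c-30',
            ('external_ids',
             {'iface-id': '58b50f5f-6287-4aaf-8771-fdbeb3300763'})))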
Dec  1 20:01:16 compute-0 ovn_controller[97948]: 2025-12-01T20:01:16Z|00070|binding|INFO|Releasing lport 58b50f5f-6287-4aaf-8771-fdbeb3300763 from this chassis (sb_readonly=0)
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.783 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.787 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.794 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619276.764358, 98c0547a-3efc-4214-85f9-ccceaf32a2a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.794 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.806 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.807 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/584f129c-30be-45c6-a239-e6753cbee124.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/584f129c-30be-45c6-a239-e6753cbee124.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.809 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f904f4bb-8670-45ec-8aee-c891d12ad554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.810 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: global
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-584f129c-30be-45c6-a239-e6753cbee124
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/584f129c-30be-45c6-a239-e6753cbee124.pid.haproxy
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID 584f129c-30be-45c6-a239-e6753cbee124
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
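The rendered haproxy config gives this network its own metadata proxy inside the ovnmeta namespace: it binds the link-local metadata address 169.254.169.254:80, forwards to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can resolve which network (and hence which instance) is asking. From a guest attached to that network, the standard metadata URL should therefore answer; a minimal check, hypothetical and meant to run inside such a guest:

    import urllib.request

    # The proxy above listens on the link-local metadata address.
    with urllib.request.urlopen(
            'http://169.254.169.254/openstack/latest/meta_data.json',
            timeout=10) as resp:
        print(resp.read().decode())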
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.810 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:16 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:16.810 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-584f129c-30be-45c6-a239-e6753cbee124', 'env', 'PROCESS_TAG=haproxy-584f129c-30be-45c6-a239-e6753cbee124', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/584f129c-30be-45c6-a239-e6753cbee124.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.818 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.823 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:01:16 compute-0 nova_compute[189564]: 2025-12-01 20:01:16.852 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
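The numeric states in the sync message map to the nova.compute.power_state constants (DB power_state 0 = NOSTATE, VM power_state 3 = PAUSED); because task_state is still spawning, the periodic sync leaves the record alone rather than forcing vm_state to match the hypervisor. For reference, the mapping as defined in nova/compute/power_state.py:

    # Constants from nova/compute/power_state.py.
    NOSTATE = 0
    RUNNING = 1
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7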
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.058 189568 INFO nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Creating config drive at /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk.config#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.065 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph8uo_yq6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.194 189568 DEBUG oslo_concurrency.processutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph8uo_yq6" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
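The config drive is built by staging the metadata files in a temporary directory (/tmp/tmph8uo_yq6 above) and packing them into an ISO9660 image with volume label config-2, with Joliet (-J) and Rock Ridge (-r) extensions so cloud-init can mount it regardless of guest OS. A reduced sketch of the same invocation, with placeholder paths and flags mirroring the logged command line:

    import subprocess

    # Pack a staged directory into a config-drive ISO, as in the log.
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', '/tmp/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/configdrive-staging'],
        check=True)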
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.231 189568 DEBUG nova.network.neutron [req-239f4da3-70c2-4b24-af48-3a8009836f9d req-7d72dca6-051b-4476-baa1-01b06a4a5f47 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Updated VIF entry in instance network info cache for port 6f128282-4268-4162-a349-1906ef0a8e4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.232 189568 DEBUG nova.network.neutron [req-239f4da3-70c2-4b24-af48-3a8009836f9d req-7d72dca6-051b-4476-baa1-01b06a4a5f47 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Updating instance_info_cache with network_info: [{"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.251 189568 DEBUG oslo_concurrency.lockutils [req-239f4da3-70c2-4b24-af48-3a8009836f9d req-7d72dca6-051b-4476-baa1-01b06a4a5f47 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-98c0547a-3efc-4214-85f9-ccceaf32a2a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:01:17 compute-0 podman[253822]: 2025-12-01 20:01:17.252221218 +0000 UTC m=+0.060109671 container create e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 20:01:17 compute-0 kernel: tap241aee4b-ac: entered promiscuous mode
Dec  1 20:01:17 compute-0 NetworkManager[56474]: <info>  [1764619277.2626] manager: (tap241aee4b-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Dec  1 20:01:17 compute-0 ovn_controller[97948]: 2025-12-01T20:01:17Z|00071|binding|INFO|Claiming lport 241aee4b-acee-43c4-b165-e8322c56a1d3 for this chassis.
Dec  1 20:01:17 compute-0 ovn_controller[97948]: 2025-12-01T20:01:17Z|00072|binding|INFO|241aee4b-acee-43c4-b165-e8322c56a1d3: Claiming fa:16:3e:94:01:de 10.100.0.13
Dec  1 20:01:17 compute-0 systemd-udevd[253758]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.266 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.273 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:01:de 10.100.0.13'], port_security=['fa:16:3e:94:01:de 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5e264735-c003-4c77-8b16-cb48211f837f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02b2a851f173482691b98aa9a993fbf9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0013d713-aa83-4343-96c6-63b4b2a5c1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a954f810-f351-47e3-9327-23a3c2f185c8, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=241aee4b-acee-43c4-b165-e8322c56a1d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:01:17 compute-0 NetworkManager[56474]: <info>  [1764619277.2796] device (tap241aee4b-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:01:17 compute-0 NetworkManager[56474]: <info>  [1764619277.2803] device (tap241aee4b-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:01:17 compute-0 ovn_controller[97948]: 2025-12-01T20:01:17Z|00073|binding|INFO|Setting lport 241aee4b-acee-43c4-b165-e8322c56a1d3 ovn-installed in OVS
Dec  1 20:01:17 compute-0 ovn_controller[97948]: 2025-12-01T20:01:17Z|00074|binding|INFO|Setting lport 241aee4b-acee-43c4-b165-e8322c56a1d3 up in Southbound
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.281 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.282 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 systemd[1]: Started libpod-conmon-e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917.scope.
Dec  1 20:01:17 compute-0 podman[253822]: 2025-12-01 20:01:17.220540483 +0000 UTC m=+0.028428956 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 20:01:17 compute-0 systemd-machined[155891]: New machine qemu-6-instance-00000006.
Dec  1 20:01:17 compute-0 systemd[1]: Started libcrun container.
Dec  1 20:01:17 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec  1 20:01:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45db12e47ffb0ebad5ab580a0a80fc4bcbd9365e0fd34157f2d71c9a8a45090f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 20:01:17 compute-0 podman[253822]: 2025-12-01 20:01:17.360005902 +0000 UTC m=+0.167894375 container init e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  1 20:01:17 compute-0 podman[253822]: 2025-12-01 20:01:17.370271461 +0000 UTC m=+0.178159914 container start e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  1 20:01:17 compute-0 neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124[253850]: [NOTICE]   (253860) : New worker (253864) forked
Dec  1 20:01:17 compute-0 neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124[253850]: [NOTICE]   (253860) : Loading success.
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.427 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 241aee4b-acee-43c4-b165-e8322c56a1d3 in datapath 50f1d760-d79c-40bd-a9b3-cf73e6f75cf0 unbound from our chassis#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.430 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 50f1d760-d79c-40bd-a9b3-cf73e6f75cf0#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.443 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[5f427499-f0d2-4744-bce7-17ca04f5aed4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.444 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap50f1d760-d1 in ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
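provision_datapath wires each new datapath up with a veth pair: the -d1 end lands inside the ovnmeta namespace (where haproxy will bind the metadata address) and the -d0 peer stays in the root namespace to be plugged into br-int, as the OVS transactions a few lines below show. A sketch of that step with pyroute2, using the names from the log and assuming the namespace already exists under /var/run/netns:

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        # Create the pair in the root namespace...
        ipr.link('add', ifname='tap50f1d760-d0', kind='veth',
                 peer='tap50f1d760-d1')
        # ...then move the -d1 end into the metadata namespace.
        idx = ipr.link_lookup(ifname='tap50f1d760-d1')[0]
        ipr.link('set', index=idx,
                 net_ns_fd='ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0')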
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.446 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap50f1d760-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.446 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4ba836-6e5b-4d3f-a9f7-9bb120f42725]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.447 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[dbeabda0-4dbe-4ec5-a926-900b137b14e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.460 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[cef1aae1-630a-4d2b-88f1-13595f9ccb34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.487 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[623bc6ed-e97c-45a0-82ae-07ac2956b925]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.523 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[7050ce00-18af-4d2e-b8aa-3851d0a143fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.530 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfdae71-61d9-4330-ad3a-954317fb3fb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 NetworkManager[56474]: <info>  [1764619277.5314] manager: (tap50f1d760-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.573 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba78b60-e866-469b-8421-24c65efab19a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.583 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[93a42409-2692-4a51-8d5f-ab0197478336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 NetworkManager[56474]: <info>  [1764619277.6248] device (tap50f1d760-d0): carrier: link connected
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.637 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e66098-0899-4f37-81ba-42a577c23761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.660 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f1b0c0-69c9-42b1-a03c-313b76407cf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f1d760-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:7e:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576534, 'reachable_time': 39442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253884, 'error': None, 'target': 'ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.682 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c74c86-98d6-4ad0-aa90-36027a4fa096]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:7ef3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576534, 'tstamp': 576534}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253886, 'error': None, 'target': 'ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.698 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[dc00d3ee-a579-45af-986d-85cac3bc8a19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap50f1d760-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:7e:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576534, 'reachable_time': 39442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253887, 'error': None, 'target': 'ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.744 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f3fe0351-5b88-40e4-b571-b2f0b240047c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.762 189568 DEBUG nova.network.neutron [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Updating instance_info_cache with network_info: [{"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.790 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Releasing lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.790 189568 DEBUG nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Instance network_info: |[{"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.791 189568 DEBUG oslo_concurrency.lockutils [req-e068856a-a5fb-46b4-8a84-39e4de18a119 req-5671b8e4-0e06-4880-96b0-0769abbfed50 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.792 189568 DEBUG nova.network.neutron [req-e068856a-a5fb-46b4-8a84-39e4de18a119 req-5671b8e4-0e06-4880-96b0-0769abbfed50 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Refreshing network info cache for port 09097114-7a48-4b64-ab17-ed474efbf80e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.795 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Start _get_guest_xml network_info=[{"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.813 189568 WARNING nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.817 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619277.8162906, 5e264735-c003-4c77-8b16-cb48211f837f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.817 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[da69f9f6-0572-4457-9525-42bf294019f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.819 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] VM Started (Lifecycle Event)#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.819 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f1d760-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.819 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.820 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50f1d760-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:17 compute-0 NetworkManager[56474]: <info>  [1764619277.8225] manager: (tap50f1d760-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec  1 20:01:17 compute-0 kernel: tap50f1d760-d0: entered promiscuous mode
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.824 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.830 189568 DEBUG nova.virt.libvirt.host [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.831 189568 DEBUG nova.virt.libvirt.host [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.832 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap50f1d760-d0, col_values=(('external_ids', {'iface-id': '0fde8b60-857e-42f6-8410-4e92f155d06a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.836 189568 DEBUG nova.virt.libvirt.host [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:01:17 compute-0 ovn_controller[97948]: 2025-12-01T20:01:17Z|00075|binding|INFO|Releasing lport 0fde8b60-857e-42f6-8410-4e92f155d06a from this chassis (sb_readonly=0)
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.836 189568 DEBUG nova.virt.libvirt.host [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
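[The two probes above first look for a cgroup v1 'cpu' controller (missing on this host) and then fall back to the unified v2 hierarchy, which is found. A minimal sketch of the v2 check, assuming the standard /sys/fs/cgroup mount; this is not nova's actual implementation:]

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        """True if the unified (v2) hierarchy exposes the 'cpu' controller."""
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # no cgroup.controllers file => not a cgroup v2 mount
        return "cpu" in controllers.read_text().split()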
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.837 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.837 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.838 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.838 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.839 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.839 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.839 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.839 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.840 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.840 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.840 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.841 189568 DEBUG nova.virt.hardware [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
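[For this m1.nano flavor (vcpus=1) with no flavor or image topology preferences, the only factorization of one vCPU is sockets=1, cores=1, threads=1, which is why exactly one topology is logged. An illustrative enumeration, not nova's code:]

    def possible_topologies(vcpus):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for sockets in range(1, vcpus + 1):
            for cores in range(1, vcpus + 1):
                for threads in range(1, vcpus + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log above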
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.844 189568 DEBUG nova.virt.libvirt.vif [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1064429924',display_name='tempest-ServerActionsTestJSON-server-1064429924',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1064429924',id=7,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNy2Fa/005sFOm6rBTfWAhWPMicjwNe2lxBTmDNZ4YT4rkioptEkmqoV9BaZ0x7iRnfzTvUcepaaUfsJtdWIwpd6ISWDG/KMPFbrCHDmVc4nqNhxbzpyNrnXIODKw/JJYg==',key_name='tempest-keypair-1301911410',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5102d72cb1ce4e6da810b2584a2abd73',ramdisk_id='',reservation_id='r-3k9rdt17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-87382225',owner_user_name='tempest-ServerActionsTestJSON-87382225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:01:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='89c8a8cb31224140bf2b9c0b94acfe04',uuid=4a104baa-5fd5-47aa-973b-11d99c76c3e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.844 189568 DEBUG nova.network.os_vif_util [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converting VIF {"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.845 189568 DEBUG nova.network.os_vif_util [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
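[nova_to_osvif_vif turns the Neutron-supplied VIF dict into a typed os-vif object; every field in the VIFOpenVSwitch repr above maps onto a key of the dict. A toy dataclass sketch of that mapping (field names taken from the log, everything else illustrative; the real class lives in os_vif.objects):]

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool = False

    def nova_vif_to_osvif(vif: dict) -> VIFOpenVSwitch:
        details = vif.get("details", {})
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
            vif_name=vif["devname"],          # e.g. "tap09097114-7a"
            active=vif.get("active", False),
        )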
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.846 189568 DEBUG nova.objects.instance [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.848 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.849 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.857 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619277.816409, 5e264735-c003-4c77-8b16-cb48211f837f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.857 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.864 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <uuid>4a104baa-5fd5-47aa-973b-11d99c76c3e2</uuid>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <name>instance-00000007</name>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <nova:name>tempest-ServerActionsTestJSON-server-1064429924</nova:name>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:01:17</nova:creationTime>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:01:17 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:01:17 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:01:17 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:01:17 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:01:17 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:01:17 compute-0 nova_compute[189564]:        <nova:user uuid="89c8a8cb31224140bf2b9c0b94acfe04">tempest-ServerActionsTestJSON-87382225-project-member</nova:user>
Dec  1 20:01:17 compute-0 nova_compute[189564]:        <nova:project uuid="5102d72cb1ce4e6da810b2584a2abd73">tempest-ServerActionsTestJSON-87382225</nova:project>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="d169c234-7ac2-4fdc-b9fa-a08c93484d75"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:01:17 compute-0 nova_compute[189564]:        <nova:port uuid="09097114-7a48-4b64-ab17-ed474efbf80e">
Dec  1 20:01:17 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <system>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <entry name="serial">4a104baa-5fd5-47aa-973b-11d99c76c3e2</entry>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <entry name="uuid">4a104baa-5fd5-47aa-973b-11d99c76c3e2</entry>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    </system>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <os>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  </os>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <features>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  </features>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.config"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:3e:bf:1a"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <target dev="tap09097114-7a"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/console.log" append="off"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <video>
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    </video>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:01:17 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:01:17 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:01:17 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:01:17 compute-0 nova_compute[189564]: </domain>
Dec  1 20:01:17 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
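[With the domain XML assembled, starting the guest reduces to handing the document to libvirt. Nova's driver goes through its own Guest wrapper with extra launch flags; the bare libvirt-python equivalent, assuming the XML above is saved to domain.xml, is roughly:]

    import libvirt

    conn = libvirt.open("qemu:///system")
    with open("domain.xml") as f:
        xml = f.read()
    dom = conn.createXML(xml, 0)  # start a transient domain from the XML
    print(dom.name(), dom.UUIDString())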
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.865 189568 DEBUG nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Preparing to wait for external event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.866 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.867 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.867 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.868 189568 DEBUG nova.virt.libvirt.vif [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1064429924',display_name='tempest-ServerActionsTestJSON-server-1064429924',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1064429924',id=7,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNy2Fa/005sFOm6rBTfWAhWPMicjwNe2lxBTmDNZ4YT4rkioptEkmqoV9BaZ0x7iRnfzTvUcepaaUfsJtdWIwpd6ISWDG/KMPFbrCHDmVc4nqNhxbzpyNrnXIODKw/JJYg==',key_name='tempest-keypair-1301911410',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5102d72cb1ce4e6da810b2584a2abd73',ramdisk_id='',reservation_id='r-3k9rdt17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-87382225',owner_user_name='tempest-ServerActionsTestJSON-87382225-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:01:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='89c8a8cb31224140bf2b9c0b94acfe04',uuid=4a104baa-5fd5-47aa-973b-11d99c76c3e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.868 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/50f1d760-d79c-40bd-a9b3-cf73e6f75cf0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/50f1d760-d79c-40bd-a9b3-cf73e6f75cf0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
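[The ENOENT above is expected: no haproxy has been spawned for this network yet, so its pidfile cannot exist, and get_value_from_file tolerates that. A sketch of the pattern, not neutron's exact code:]

    def get_value_from_file(path):
        """Return the file's stripped contents, or None if it doesn't exist yet."""
        try:
            with open(path) as f:
                return f.read().strip()
        except FileNotFoundError:
            return None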
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.869 189568 DEBUG nova.network.os_vif_util [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converting VIF {"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.870 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fa0ac6-38e0-4bf8-9520-79bcf8080872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.871 189568 DEBUG nova.network.os_vif_util [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.872 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: global
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/50f1d760-d79c-40bd-a9b3-cf73e6f75cf0.pid.haproxy
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.872 189568 DEBUG os_vif [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID 50f1d760-d79c-40bd-a9b3-cf73e6f75cf0
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
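[create_config_file renders the per-network haproxy config above by substituting the network UUID, pidfile path, and socket into a template. A shortened sketch of that substitution with string.Template; the full template lives in neutron.agent.ovn.metadata.driver, and this excerpt covers only the global stanza:]

    from string import Template

    HAPROXY_GLOBAL = Template(
        "global\n"
        "    log         /dev/log local0 debug\n"
        "    log-tag     haproxy-metadata-proxy-$network_id\n"
        "    user        root\n"
        "    group       root\n"
        "    maxconn     1024\n"
        "    pidfile     $pidfile\n"
        "    daemon\n")

    cfg = HAPROXY_GLOBAL.substitute(
        network_id="50f1d760-d79c-40bd-a9b3-cf73e6f75cf0",
        pidfile="/var/lib/neutron/external/pids/"
                "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0.pid.haproxy")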
Dec  1 20:01:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:17.873 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0', 'env', 'PROCESS_TAG=haproxy-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/50f1d760-d79c-40bd-a9b3-cf73e6f75cf0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
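[Stripped of sudo and neutron-rootwrap, the command above amounts to launching haproxy inside the network's ovnmeta- namespace. An illustrative unprivileged-syntax equivalent (the agent itself always goes through rootwrap as logged):]

    import subprocess

    ns = "ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0"
    conf = "/var/lib/neutron/ovn-metadata-proxy/50f1d760-d79c-40bd-a9b3-cf73e6f75cf0.conf"
    subprocess.run(
        ["ip", "netns", "exec", ns,
         "env", "PROCESS_TAG=haproxy-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0",
         "haproxy", "-f", conf],
        check=True)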
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.873 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.874 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.875 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.876 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.879 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.881 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.883 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09097114-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.884 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09097114-7a, col_values=(('external_ids', {'iface-id': '09097114-7a48-4b64-ab17-ed474efbf80e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:bf:1a', 'vm-uuid': '4a104baa-5fd5-47aa-973b-11d99c76c3e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
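[The two commands above are batched into a single OVSDB transaction: add the tap port to br-int, then stamp the Interface row with the external_ids OVN uses to bind the logical port. An ovs-vsctl equivalent of the same transaction, with values from the log (the vm-uuid key is omitted for brevity; inner quotes keep the colon-bearing MAC a single OVSDB string):]

    import subprocess

    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap09097114-7a",
        "--", "set", "Interface", "tap09097114-7a",
        'external_ids:iface-id="09097114-7a48-4b64-ab17-ed474efbf80e"',
        'external_ids:attached-mac="fa:16:3e:3e:bf:1a"',
        "external_ids:iface-status=active",
    ], check=True)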
Dec  1 20:01:17 compute-0 NetworkManager[56474]: <info>  [1764619277.8872] manager: (tap09097114-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.886 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.888 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.892 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.897 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.898 189568 INFO os_vif [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a')#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.917 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.954 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.955 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.955 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] No VIF found with MAC fa:16:3e:3e:bf:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:01:17 compute-0 nova_compute[189564]: 2025-12-01 20:01:17.956 189568 INFO nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Using config drive#033[00m
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.266 189568 DEBUG nova.network.neutron [req-6a171cd7-98ee-482d-8051-08cc653cc61b req-0a18b6d9-5570-465e-b34e-40b35a98fd5f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Updated VIF entry in instance network info cache for port 241aee4b-acee-43c4-b165-e8322c56a1d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.267 189568 DEBUG nova.network.neutron [req-6a171cd7-98ee-482d-8051-08cc653cc61b req-0a18b6d9-5570-465e-b34e-40b35a98fd5f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Updating instance_info_cache with network_info: [{"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.287 189568 DEBUG oslo_concurrency.lockutils [req-6a171cd7-98ee-482d-8051-08cc653cc61b req-0a18b6d9-5570-465e-b34e-40b35a98fd5f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-5e264735-c003-4c77-8b16-cb48211f837f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:01:18 compute-0 podman[253928]: 2025-12-01 20:01:18.309286109 +0000 UTC m=+0.099397764 container create 574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:01:18 compute-0 systemd[1]: Started libpod-conmon-574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7.scope.
Dec  1 20:01:18 compute-0 podman[253928]: 2025-12-01 20:01:18.27494639 +0000 UTC m=+0.065058095 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 20:01:18 compute-0 systemd[1]: Started libcrun container.
Dec  1 20:01:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a1576c400e4a043d1f747219044d7cbdec89b586824c0040672cb1dae1439f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.405 189568 INFO nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Creating config drive at /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.config#033[00m
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.412 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpctihbjbm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:18 compute-0 podman[253928]: 2025-12-01 20:01:18.42536314 +0000 UTC m=+0.215474815 container init 574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 20:01:18 compute-0 podman[253928]: 2025-12-01 20:01:18.435133555 +0000 UTC m=+0.225245220 container start 574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:01:18 compute-0 neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0[253939]: [NOTICE]   (253947) : New worker (253949) forked
Dec  1 20:01:18 compute-0 neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0[253939]: [NOTICE]   (253947) : Loading success.
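[In this podified deployment the proxy does not run on the host directly: the agent asks podman to start a dedicated haproxy container, and the NOTICE lines above are haproxy coming up inside it. A heavily simplified reconstruction of the launch, where only the image and container name come from the log and the real invocation adds mounts, namespace wiring, and labels:]

    import subprocess

    subprocess.run([
        "podman", "run", "--detach",
        "--name", "neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0",
        "quay.io/podified-antelope-centos9/"
        "openstack-neutron-metadata-agent-ovn:current-podified",
    ], check=True)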
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.537 189568 DEBUG oslo_concurrency.processutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpctihbjbm" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
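[The ISO built above carries the volume label config-2, which is how guests find the config drive. From inside the guest, the drive can be mounted by that label and the metadata read back; a sketch, assuming a /mnt mount point:]

    import json
    import subprocess

    subprocess.run(["mount", "/dev/disk/by-label/config-2", "/mnt"], check=True)
    with open("/mnt/openstack/latest/meta_data.json") as f:
        print(json.load(f)["uuid"])  # 4a104baa-5fd5-47aa-973b-11d99c76c3e2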
Dec  1 20:01:18 compute-0 kernel: tap09097114-7a: entered promiscuous mode
Dec  1 20:01:18 compute-0 NetworkManager[56474]: <info>  [1764619278.6281] manager: (tap09097114-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Dec  1 20:01:18 compute-0 ovn_controller[97948]: 2025-12-01T20:01:18Z|00076|binding|INFO|Claiming lport 09097114-7a48-4b64-ab17-ed474efbf80e for this chassis.
Dec  1 20:01:18 compute-0 ovn_controller[97948]: 2025-12-01T20:01:18Z|00077|binding|INFO|09097114-7a48-4b64-ab17-ed474efbf80e: Claiming fa:16:3e:3e:bf:1a 10.100.0.13
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.631 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.642 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:bf:1a 10.100.0.13'], port_security=['fa:16:3e:3e:bf:1a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4a104baa-5fd5-47aa-973b-11d99c76c3e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5102d72cb1ce4e6da810b2584a2abd73', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb1a9182-2a79-4a69-a063-58799cf34a33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0f29072-dc2b-4972-a602-c2fe180fbdaf, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=09097114-7a48-4b64-ab17-ed474efbf80e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.644 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 09097114-7a48-4b64-ab17-ed474efbf80e in datapath 419dfb65-f0dd-44b5-a131-b7c37ebf4bab bound to our chassis#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.647 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 419dfb65-f0dd-44b5-a131-b7c37ebf4bab#033[00m
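[Once provisioning finishes, the haproxy listener shown earlier answers on the link-local metadata address for guests on this network. From a booted guest, the service is reachable as:]

    import urllib.request

    with urllib.request.urlopen(
            "http://169.254.169.254/openstack/latest/meta_data.json") as resp:
        print(resp.read().decode())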
Dec  1 20:01:18 compute-0 NetworkManager[56474]: <info>  [1764619278.6495] device (tap09097114-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:01:18 compute-0 NetworkManager[56474]: <info>  [1764619278.6583] device (tap09097114-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:01:18 compute-0 ovn_controller[97948]: 2025-12-01T20:01:18Z|00078|binding|INFO|Setting lport 09097114-7a48-4b64-ab17-ed474efbf80e ovn-installed in OVS
Dec  1 20:01:18 compute-0 ovn_controller[97948]: 2025-12-01T20:01:18Z|00079|binding|INFO|Setting lport 09097114-7a48-4b64-ab17-ed474efbf80e up in Southbound
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.660 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.661 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.666 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbebb34-256c-41b2-b89c-4a3cbb1de477]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.668 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap419dfb65-f1 in ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.671 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap419dfb65-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.671 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[431e0098-d153-47e8-a213-cac0500df6b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.674 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[95b98227-0c9e-487a-9949-a4a3e81abcca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
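[The agent creates the veth pair above via privsep and pyroute2. The plain iproute2 equivalent, keeping one end (tap419dfb65-f0, which NetworkManager later reports as a new Veth device) on the host and moving the peer into the ovnmeta namespace, would be roughly:]

    import subprocess

    ns = "ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab"
    subprocess.run(["ip", "link", "add", "tap419dfb65-f0",
                    "type", "veth", "peer", "name", "tap419dfb65-f1"], check=True)
    subprocess.run(["ip", "link", "set", "tap419dfb65-f1", "netns", ns], check=True)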
Dec  1 20:01:18 compute-0 systemd-machined[155891]: New machine qemu-7-instance-00000007.
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.690 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[a58d0513-258c-49df-95e0-b0fb12bcea36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.716 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac09996-acc0-4e2b-9656-edd961e43276]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.750 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7393e5-32f5-4338-9c11-9d1ecda47b15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.757 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[902bca77-8462-46f2-8165-b9cfc3555e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 NetworkManager[56474]: <info>  [1764619278.7600] manager: (tap419dfb65-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.790 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[7b355847-c76d-4776-bf06-9eb5bce98275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 nova_compute[189564]: 2025-12-01 20:01:18.793 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.795 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[28095e49-84f0-4dfb-b9c3-4f21df1aac59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 20:01:18 compute-0 NetworkManager[56474]: <info>  [1764619278.8323] device (tap419dfb65-f0): carrier: link connected
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.839 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[46692346-c484-47fd-951a-a4535dd1a6bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.860 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[88c27904-9f43-44a5-a97e-1fc9ddb53361]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap419dfb65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:9b:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576655, 'reachable_time': 35965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254006, 'error': None, 'target': 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.879 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b3bb56a0-44c9-4eee-81d0-3ff275bc720a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:9b3e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576655, 'tstamp': 576655}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254009, 'error': None, 'target': 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.903 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffba55f-6ca3-44ba-bb3a-6e00ecfbf493]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap419dfb65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:9b:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576655, 'reachable_time': 35965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254010, 'error': None, 'target': 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:18.939 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[fc36dbd9-4c78-4014-b5a2-715cae41c5ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:19.005 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[bbec5972-b1a7-431a-93b6-406a4adb1bf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:19.006 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap419dfb65-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:19.006 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:19.007 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap419dfb65-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.009 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:19 compute-0 kernel: tap419dfb65-f0: entered promiscuous mode
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:19.012 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap419dfb65-f0, col_values=(('external_ids', {'iface-id': '0966f8f1-95fd-4a77-80c1-25197c60ec2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:19 compute-0 ovn_controller[97948]: 2025-12-01T20:01:19Z|00080|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:01:19 compute-0 NetworkManager[56474]: <info>  [1764619279.0142] manager: (tap419dfb65-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.013 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.026 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:19.027 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/419dfb65-f0dd-44b5-a131-b7c37ebf4bab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/419dfb65-f0dd-44b5-a131-b7c37ebf4bab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:19.028 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[134abbb0-9943-4c8f-8a90-5347669c7fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:19.029 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: global
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-419dfb65-f0dd-44b5-a131-b7c37ebf4bab
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/419dfb65-f0dd-44b5-a131-b7c37ebf4bab.pid.haproxy
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID 419dfb65-f0dd-44b5-a131-b7c37ebf4bab
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 20:01:19 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:19.029 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'env', 'PROCESS_TAG=haproxy-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/419dfb65-f0dd-44b5-a131-b7c37ebf4bab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.074 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619279.0737026, 4a104baa-5fd5-47aa-973b-11d99c76c3e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.074 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] VM Started (Lifecycle Event)#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.100 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.108 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619279.0748212, 4a104baa-5fd5-47aa-973b-11d99c76c3e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.108 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.129 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.134 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.150 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.270 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.271 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.271 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.271 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.362 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:19 compute-0 podman[254049]: 2025-12-01 20:01:19.419031778 +0000 UTC m=+0.070885706 container create c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.434 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.436 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:19 compute-0 podman[254049]: 2025-12-01 20:01:19.381959276 +0000 UTC m=+0.033813204 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 20:01:19 compute-0 systemd[1]: Started libpod-conmon-c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c.scope.
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.511 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.520 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:19 compute-0 systemd[1]: Started libcrun container.
Dec  1 20:01:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d139d7d6fc9e60e18b5717679f82de0ce940f2f4fa2594cdcdf9444ca8cd222/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 20:01:19 compute-0 podman[254049]: 2025-12-01 20:01:19.561848542 +0000 UTC m=+0.213702450 container init c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:01:19 compute-0 podman[254065]: 2025-12-01 20:01:19.567598131 +0000 UTC m=+0.100474167 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4)
Dec  1 20:01:19 compute-0 podman[254049]: 2025-12-01 20:01:19.56915082 +0000 UTC m=+0.221004708 container start c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 20:01:19 compute-0 podman[254069]: 2025-12-01 20:01:19.587215622 +0000 UTC m=+0.099182237 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Dec  1 20:01:19 compute-0 podman[254075]: 2025-12-01 20:01:19.590198944 +0000 UTC m=+0.106400391 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:01:19 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[254107]: [NOTICE]   (254158) : New worker (254170) forked
Dec  1 20:01:19 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[254107]: [NOTICE]   (254158) : Loading success.
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.599 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.601 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:19 compute-0 podman[254081]: 2025-12-01 20:01:19.611029003 +0000 UTC m=+0.119348724 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 20:01:19 compute-0 podman[254068]: 2025-12-01 20:01:19.626469783 +0000 UTC m=+0.151375751 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.660 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.670 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.739 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.740 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:01:19 compute-0 nova_compute[189564]: 2025-12-01 20:01:19.830 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.240 189568 DEBUG nova.network.neutron [req-e068856a-a5fb-46b4-8a84-39e4de18a119 req-5671b8e4-0e06-4880-96b0-0769abbfed50 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Updated VIF entry in instance network info cache for port 09097114-7a48-4b64-ab17-ed474efbf80e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.241 189568 DEBUG nova.network.neutron [req-e068856a-a5fb-46b4-8a84-39e4de18a119 req-5671b8e4-0e06-4880-96b0-0769abbfed50 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Updating instance_info_cache with network_info: [{"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.267 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.268 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=72.33732223510742GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.269 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.269 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.271 189568 DEBUG oslo_concurrency.lockutils [req-e068856a-a5fb-46b4-8a84-39e4de18a119 req-5671b8e4-0e06-4880-96b0-0769abbfed50 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.363 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 98c0547a-3efc-4214-85f9-ccceaf32a2a6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.363 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 5e264735-c003-4c77-8b16-cb48211f837f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.363 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 4a104baa-5fd5-47aa-973b-11d99c76c3e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.364 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.364 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.499 189568 DEBUG nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Received event network-vif-plugged-6f128282-4268-4162-a349-1906ef0a8e4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.500 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.500 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.502 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.504 189568 DEBUG nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Processing event network-vif-plugged-6f128282-4268-4162-a349-1906ef0a8e4d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.506 189568 DEBUG nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Received event network-vif-plugged-6f128282-4268-4162-a349-1906ef0a8e4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.508 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.509 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.510 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.511 189568 DEBUG nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] No waiting events found dispatching network-vif-plugged-6f128282-4268-4162-a349-1906ef0a8e4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.511 189568 WARNING nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Received unexpected event network-vif-plugged-6f128282-4268-4162-a349-1906ef0a8e4d for instance with vm_state building and task_state spawning.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.512 189568 DEBUG nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received event network-vif-plugged-241aee4b-acee-43c4-b165-e8322c56a1d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.513 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "5e264735-c003-4c77-8b16-cb48211f837f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.514 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.514 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.515 189568 DEBUG nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Processing event network-vif-plugged-241aee4b-acee-43c4-b165-e8322c56a1d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.516 189568 DEBUG nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received event network-vif-plugged-241aee4b-acee-43c4-b165-e8322c56a1d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.516 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "5e264735-c003-4c77-8b16-cb48211f837f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.517 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.518 189568 DEBUG oslo_concurrency.lockutils [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.518 189568 DEBUG nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] No waiting events found dispatching network-vif-plugged-241aee4b-acee-43c4-b165-e8322c56a1d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.519 189568 WARNING nova.compute.manager [req-62a00a04-4838-4026-94be-dc11432f5002 req-2fe78f00-c311-4465-b8eb-062b9bbfde99 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received unexpected event network-vif-plugged-241aee4b-acee-43c4-b165-e8322c56a1d3 for instance with vm_state building and task_state spawning.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.521 189568 DEBUG nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.522 189568 DEBUG nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.529 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619280.5290632, 98c0547a-3efc-4214-85f9-ccceaf32a2a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.531 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.535 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.536 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.545 189568 INFO nova.virt.libvirt.driver [-] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Instance spawned successfully.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.546 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.550 189568 INFO nova.virt.libvirt.driver [-] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Instance spawned successfully.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.551 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.647 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.656 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.664 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.674 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.681 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.681 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.682 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.682 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.683 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.683 189568 DEBUG nova.virt.libvirt.driver [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.689 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.690 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.690 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.690 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.691 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.691 189568 DEBUG nova.virt.libvirt.driver [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.697 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.697 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.733 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.734 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619280.5292585, 5e264735-c003-4c77-8b16-cb48211f837f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.734 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.772 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.790 189568 INFO nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Took 12.92 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.791 189568 DEBUG nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.795 189568 INFO nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Took 13.88 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.796 189568 DEBUG nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.803 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.847 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.898 189568 INFO nova.compute.manager [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Took 13.54 seconds to build instance.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.905 189568 INFO nova.compute.manager [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Took 14.57 seconds to build instance.#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.920 189568 DEBUG oslo_concurrency.lockutils [None req-43758946-978a-45a5-8816-a58faf122cbd 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:20 compute-0 nova_compute[189564]: 2025-12-01 20:01:20.922 189568 DEBUG oslo_concurrency.lockutils [None req-96b534b9-88b0-4edd-a388-382ce12d6cf4 e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:21 compute-0 nova_compute[189564]: 2025-12-01 20:01:21.696 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:21 compute-0 nova_compute[189564]: 2025-12-01 20:01:21.697 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 20:01:21 compute-0 nova_compute[189564]: 2025-12-01 20:01:21.698 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 20:01:21 compute-0 nova_compute[189564]: 2025-12-01 20:01:21.721 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  1 20:01:21 compute-0 nova_compute[189564]: 2025-12-01 20:01:21.913 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-98c0547a-3efc-4214-85f9-ccceaf32a2a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:01:21 compute-0 nova_compute[189564]: 2025-12-01 20:01:21.913 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-98c0547a-3efc-4214-85f9-ccceaf32a2a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:01:21 compute-0 nova_compute[189564]: 2025-12-01 20:01:21.914 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 20:01:21 compute-0 nova_compute[189564]: 2025-12-01 20:01:21.915 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 98c0547a-3efc-4214-85f9-ccceaf32a2a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:01:22 compute-0 nova_compute[189564]: 2025-12-01 20:01:22.887 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.169 189568 DEBUG oslo_concurrency.lockutils [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.171 189568 DEBUG oslo_concurrency.lockutils [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.172 189568 DEBUG oslo_concurrency.lockutils [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.172 189568 DEBUG oslo_concurrency.lockutils [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.173 189568 DEBUG oslo_concurrency.lockutils [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.176 189568 INFO nova.compute.manager [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Terminating instance#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.179 189568 DEBUG nova.compute.manager [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 20:01:23 compute-0 kernel: tap6f128282-42 (unregistering): left promiscuous mode
Dec  1 20:01:23 compute-0 NetworkManager[56474]: <info>  [1764619283.2350] device (tap6f128282-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:01:23 compute-0 ovn_controller[97948]: 2025-12-01T20:01:23Z|00081|binding|INFO|Releasing lport 6f128282-4268-4162-a349-1906ef0a8e4d from this chassis (sb_readonly=0)
Dec  1 20:01:23 compute-0 ovn_controller[97948]: 2025-12-01T20:01:23Z|00082|binding|INFO|Setting lport 6f128282-4268-4162-a349-1906ef0a8e4d down in Southbound
Dec  1 20:01:23 compute-0 ovn_controller[97948]: 2025-12-01T20:01:23Z|00083|binding|INFO|Removing iface tap6f128282-42 ovn-installed in OVS
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.257 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.267 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:a3:82 10.100.0.12'], port_security=['fa:16:3e:6f:a3:82 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '98c0547a-3efc-4214-85f9-ccceaf32a2a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-584f129c-30be-45c6-a239-e6753cbee124', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd9b058a656be4393a4619312186fc083', 'neutron:revision_number': '4', 'neutron:security_group_ids': '270a4d79-bd17-4ca0-b3a5-599aea8e31b2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c227f0b-b424-4195-b582-5bbd834fa708, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=6f128282-4268-4162-a349-1906ef0a8e4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.269 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 6f128282-4268-4162-a349-1906ef0a8e4d in datapath 584f129c-30be-45c6-a239-e6753cbee124 unbound from our chassis#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.271 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 584f129c-30be-45c6-a239-e6753cbee124, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.273 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2c5da9-87c4-490b-9b5b-b1a8e136ac07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.274 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-584f129c-30be-45c6-a239-e6753cbee124 namespace which is not needed anymore#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.284 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  1 20:01:23 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 3.310s CPU time.
Dec  1 20:01:23 compute-0 systemd-machined[155891]: Machine qemu-5-instance-00000005 terminated.
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.418 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.427 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.492 189568 INFO nova.virt.libvirt.driver [-] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Instance destroyed successfully.#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.493 189568 DEBUG nova.objects.instance [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lazy-loading 'resources' on Instance uuid 98c0547a-3efc-4214-85f9-ccceaf32a2a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.513 189568 DEBUG nova.virt.libvirt.vif [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:01:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1274346215',display_name='tempest-ServerAddressesTestJSON-server-1274346215',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1274346215',id=5,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:01:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d9b058a656be4393a4619312186fc083',ramdisk_id='',reservation_id='r-p96s3cxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-296714616',owner_user_name='tempest-ServerAddressesTestJSON-296714616-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:01:20Z,user_data=None,user_id='e346f67d906543ea8982cb53415ee19b',uuid=98c0547a-3efc-4214-85f9-ccceaf32a2a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.515 189568 DEBUG nova.network.os_vif_util [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Converting VIF {"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.517 189568 DEBUG nova.network.os_vif_util [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:a3:82,bridge_name='br-int',has_traffic_filtering=True,id=6f128282-4268-4162-a349-1906ef0a8e4d,network=Network(584f129c-30be-45c6-a239-e6753cbee124),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f128282-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.518 189568 DEBUG os_vif [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:a3:82,bridge_name='br-int',has_traffic_filtering=True,id=6f128282-4268-4162-a349-1906ef0a8e4d,network=Network(584f129c-30be-45c6-a239-e6753cbee124),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f128282-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.521 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.522 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f128282-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.526 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.528 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.533 189568 INFO os_vif [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:a3:82,bridge_name='br-int',has_traffic_filtering=True,id=6f128282-4268-4162-a349-1906ef0a8e4d,network=Network(584f129c-30be-45c6-a239-e6753cbee124),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f128282-42')#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.534 189568 INFO nova.virt.libvirt.driver [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Deleting instance files /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6_del#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.535 189568 INFO nova.virt.libvirt.driver [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Deletion of /var/lib/nova/instances/98c0547a-3efc-4214-85f9-ccceaf32a2a6_del complete#033[00m
Dec  1 20:01:23 compute-0 neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124[253850]: [NOTICE]   (253860) : haproxy version is 2.8.14-c23fe91
Dec  1 20:01:23 compute-0 neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124[253850]: [NOTICE]   (253860) : path to executable is /usr/sbin/haproxy
Dec  1 20:01:23 compute-0 neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124[253850]: [WARNING]  (253860) : Exiting Master process...
Dec  1 20:01:23 compute-0 neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124[253850]: [ALERT]    (253860) : Current worker (253864) exited with code 143 (Terminated)
Dec  1 20:01:23 compute-0 neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124[253850]: [WARNING]  (253860) : All workers exited. Exiting... (0)
Dec  1 20:01:23 compute-0 systemd[1]: libpod-e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917.scope: Deactivated successfully.
Dec  1 20:01:23 compute-0 podman[254222]: 2025-12-01 20:01:23.562945787 +0000 UTC m=+0.093422297 container died e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 20:01:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917-userdata-shm.mount: Deactivated successfully.
Dec  1 20:01:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-45db12e47ffb0ebad5ab580a0a80fc4bcbd9365e0fd34157f2d71c9a8a45090f-merged.mount: Deactivated successfully.
Dec  1 20:01:23 compute-0 podman[254222]: 2025-12-01 20:01:23.630935653 +0000 UTC m=+0.161412163 container cleanup e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.632 189568 INFO nova.compute.manager [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Took 0.45 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.633 189568 DEBUG oslo.service.loopingcall [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.634 189568 DEBUG nova.compute.manager [-] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.635 189568 DEBUG nova.network.neutron [-] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:01:23 compute-0 systemd[1]: libpod-conmon-e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917.scope: Deactivated successfully.
Dec  1 20:01:23 compute-0 podman[254258]: 2025-12-01 20:01:23.740868644 +0000 UTC m=+0.077916956 container remove e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.755 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[a666ef65-7448-40dc-b81a-9bb04db860dc]: (4, ('Mon Dec  1 08:01:23 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124 (e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917)\ne749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917\nMon Dec  1 08:01:23 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-584f129c-30be-45c6-a239-e6753cbee124 (e749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917)\ne749739cfbc5cfd2122701ec135bc620bac36e648f13e541c33c066b863f9917\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.758 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[275ea3ba-e6a7-49c4-b117-f0e2906f39e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.759 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap584f129c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.763 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 kernel: tap584f129c-30: left promiscuous mode
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.781 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.787 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[fb16eb87-8d34-447a-9b49-7c6c8f0812ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:23 compute-0 nova_compute[189564]: 2025-12-01 20:01:23.800 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.806 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[a463feb7-c0aa-447e-bf87-01675e812173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.807 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a397fb-de37-4023-a850-a06946910050]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.834 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c53192-f650-413a-ac36-f007e15eeb61]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576421, 'reachable_time': 25396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254272, 'error': None, 'target': 'ovnmeta-584f129c-30be-45c6-a239-e6753cbee124', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d584f129c\x2d30be\x2d45c6\x2da239\x2de6753cbee124.mount: Deactivated successfully.
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.840 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-584f129c-30be-45c6-a239-e6753cbee124 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 20:01:23 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:23.840 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[7920f6dc-246d-4818-8873-7bc916b411cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.094 189568 DEBUG nova.compute.manager [req-b92027e1-ee8b-4022-8f8e-2a785791e335 req-d4f1c457-d9ea-4912-944c-6eca2ea6d146 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Received event network-vif-unplugged-6f128282-4268-4162-a349-1906ef0a8e4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.104 189568 DEBUG oslo_concurrency.lockutils [req-b92027e1-ee8b-4022-8f8e-2a785791e335 req-d4f1c457-d9ea-4912-944c-6eca2ea6d146 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.104 189568 DEBUG oslo_concurrency.lockutils [req-b92027e1-ee8b-4022-8f8e-2a785791e335 req-d4f1c457-d9ea-4912-944c-6eca2ea6d146 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.104 189568 DEBUG oslo_concurrency.lockutils [req-b92027e1-ee8b-4022-8f8e-2a785791e335 req-d4f1c457-d9ea-4912-944c-6eca2ea6d146 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.104 189568 DEBUG nova.compute.manager [req-b92027e1-ee8b-4022-8f8e-2a785791e335 req-d4f1c457-d9ea-4912-944c-6eca2ea6d146 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] No waiting events found dispatching network-vif-unplugged-6f128282-4268-4162-a349-1906ef0a8e4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.105 189568 DEBUG nova.compute.manager [req-b92027e1-ee8b-4022-8f8e-2a785791e335 req-d4f1c457-d9ea-4912-944c-6eca2ea6d146 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Received event network-vif-unplugged-6f128282-4268-4162-a349-1906ef0a8e4d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.402 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Updating instance_info_cache with network_info: [{"id": "6f128282-4268-4162-a349-1906ef0a8e4d", "address": "fa:16:3e:6f:a3:82", "network": {"id": "584f129c-30be-45c6-a239-e6753cbee124", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1254726330-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d9b058a656be4393a4619312186fc083", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f128282-42", "ovs_interfaceid": "6f128282-4268-4162-a349-1906ef0a8e4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.432 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-98c0547a-3efc-4214-85f9-ccceaf32a2a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.433 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.434 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.434 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:24 compute-0 nova_compute[189564]: 2025-12-01 20:01:24.435 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:25 compute-0 nova_compute[189564]: 2025-12-01 20:01:25.775 189568 DEBUG nova.network.neutron [-] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:25 compute-0 nova_compute[189564]: 2025-12-01 20:01:25.807 189568 INFO nova.compute.manager [-] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Took 2.17 seconds to deallocate network for instance.#033[00m
Dec  1 20:01:25 compute-0 nova_compute[189564]: 2025-12-01 20:01:25.865 189568 DEBUG oslo_concurrency.lockutils [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:25 compute-0 nova_compute[189564]: 2025-12-01 20:01:25.866 189568 DEBUG oslo_concurrency.lockutils [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.012 189568 DEBUG nova.compute.provider_tree [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.041 189568 DEBUG nova.scheduler.client.report [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.077 189568 DEBUG oslo_concurrency.lockutils [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.112 189568 INFO nova.scheduler.client.report [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Deleted allocations for instance 98c0547a-3efc-4214-85f9-ccceaf32a2a6#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.211 189568 DEBUG nova.compute.manager [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Received event network-vif-plugged-6f128282-4268-4162-a349-1906ef0a8e4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.213 189568 DEBUG oslo_concurrency.lockutils [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.214 189568 DEBUG oslo_concurrency.lockutils [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.215 189568 DEBUG oslo_concurrency.lockutils [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.216 189568 DEBUG nova.compute.manager [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] No waiting events found dispatching network-vif-plugged-6f128282-4268-4162-a349-1906ef0a8e4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.217 189568 WARNING nova.compute.manager [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Received unexpected event network-vif-plugged-6f128282-4268-4162-a349-1906ef0a8e4d for instance with vm_state deleted and task_state None.#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.218 189568 DEBUG nova.compute.manager [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received event network-changed-241aee4b-acee-43c4-b165-e8322c56a1d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.219 189568 DEBUG nova.compute.manager [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Refreshing instance network info cache due to event network-changed-241aee4b-acee-43c4-b165-e8322c56a1d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.220 189568 DEBUG oslo_concurrency.lockutils [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-5e264735-c003-4c77-8b16-cb48211f837f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.221 189568 DEBUG oslo_concurrency.lockutils [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-5e264735-c003-4c77-8b16-cb48211f837f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.222 189568 DEBUG nova.network.neutron [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Refreshing network info cache for port 241aee4b-acee-43c4-b165-e8322c56a1d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.309 189568 DEBUG nova.compute.manager [req-babb953e-99bb-44a0-b27e-a87c01e5ce53 req-711a4c42-7bd3-4ffc-93d8-617c8c6d19c3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.311 189568 DEBUG oslo_concurrency.lockutils [req-babb953e-99bb-44a0-b27e-a87c01e5ce53 req-711a4c42-7bd3-4ffc-93d8-617c8c6d19c3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.312 189568 DEBUG oslo_concurrency.lockutils [req-babb953e-99bb-44a0-b27e-a87c01e5ce53 req-711a4c42-7bd3-4ffc-93d8-617c8c6d19c3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.314 189568 DEBUG oslo_concurrency.lockutils [req-babb953e-99bb-44a0-b27e-a87c01e5ce53 req-711a4c42-7bd3-4ffc-93d8-617c8c6d19c3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.315 189568 DEBUG nova.compute.manager [req-babb953e-99bb-44a0-b27e-a87c01e5ce53 req-711a4c42-7bd3-4ffc-93d8-617c8c6d19c3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Processing event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.316 189568 DEBUG nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.326 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619286.3263288, 4a104baa-5fd5-47aa-973b-11d99c76c3e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.327 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.340 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.347 189568 INFO nova.virt.libvirt.driver [-] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Instance spawned successfully.#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.348 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.411 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.418 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.437 189568 DEBUG oslo_concurrency.lockutils [None req-116818aa-5bd5-4a72-9d0a-5611d400a5cb e346f67d906543ea8982cb53415ee19b d9b058a656be4393a4619312186fc083 - - default default] Lock "98c0547a-3efc-4214-85f9-ccceaf32a2a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.441 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.441 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.442 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.443 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.444 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.445 189568 DEBUG nova.virt.libvirt.driver [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.450 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.531 189568 INFO nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Took 14.42 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.533 189568 DEBUG nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.605 189568 INFO nova.compute.manager [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Took 14.98 seconds to build instance.#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.631 189568 DEBUG oslo_concurrency.lockutils [None req-323d5ec3-38e7-418e-9c44-916e6b02d0c3 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.836 189568 DEBUG oslo_concurrency.lockutils [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "5e264735-c003-4c77-8b16-cb48211f837f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.839 189568 DEBUG oslo_concurrency.lockutils [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.840 189568 DEBUG oslo_concurrency.lockutils [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "5e264735-c003-4c77-8b16-cb48211f837f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.841 189568 DEBUG oslo_concurrency.lockutils [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.841 189568 DEBUG oslo_concurrency.lockutils [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.844 189568 INFO nova.compute.manager [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Terminating instance#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.846 189568 DEBUG nova.compute.manager [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 20:01:26 compute-0 kernel: tap241aee4b-ac (unregistering): left promiscuous mode
Dec  1 20:01:26 compute-0 NetworkManager[56474]: <info>  [1764619286.8795] device (tap241aee4b-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.893 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:26 compute-0 ovn_controller[97948]: 2025-12-01T20:01:26Z|00084|binding|INFO|Releasing lport 241aee4b-acee-43c4-b165-e8322c56a1d3 from this chassis (sb_readonly=0)
Dec  1 20:01:26 compute-0 ovn_controller[97948]: 2025-12-01T20:01:26Z|00085|binding|INFO|Setting lport 241aee4b-acee-43c4-b165-e8322c56a1d3 down in Southbound
Dec  1 20:01:26 compute-0 ovn_controller[97948]: 2025-12-01T20:01:26Z|00086|binding|INFO|Removing iface tap241aee4b-ac ovn-installed in OVS
Dec  1 20:01:26 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:26.903 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:01:de 10.100.0.13'], port_security=['fa:16:3e:94:01:de 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5e264735-c003-4c77-8b16-cb48211f837f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02b2a851f173482691b98aa9a993fbf9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0013d713-aa83-4343-96c6-63b4b2a5c1dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a954f810-f351-47e3-9327-23a3c2f185c8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=241aee4b-acee-43c4-b165-e8322c56a1d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:01:26 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:26.905 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 241aee4b-acee-43c4-b165-e8322c56a1d3 in datapath 50f1d760-d79c-40bd-a9b3-cf73e6f75cf0 unbound from our chassis#033[00m
Dec  1 20:01:26 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:26.909 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50f1d760-d79c-40bd-a9b3-cf73e6f75cf0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 20:01:26 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:26.910 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ccae98-c0ca-433f-bbfc-6faafb8eeb64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:26 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:26.913 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0 namespace which is not needed anymore#033[00m
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.922 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:26 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  1 20:01:26 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 7.126s CPU time.
Dec  1 20:01:26 compute-0 systemd-machined[155891]: Machine qemu-6-instance-00000006 terminated.
Dec  1 20:01:26 compute-0 nova_compute[189564]: 2025-12-01 20:01:26.981 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.079 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.091 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.124 189568 INFO nova.virt.libvirt.driver [-] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Instance destroyed successfully.#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.125 189568 DEBUG nova.objects.instance [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lazy-loading 'resources' on Instance uuid 5e264735-c003-4c77-8b16-cb48211f837f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:01:27 compute-0 neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0[253939]: [NOTICE]   (253947) : haproxy version is 2.8.14-c23fe91
Dec  1 20:01:27 compute-0 neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0[253939]: [NOTICE]   (253947) : path to executable is /usr/sbin/haproxy
Dec  1 20:01:27 compute-0 neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0[253939]: [WARNING]  (253947) : Exiting Master process...
Dec  1 20:01:27 compute-0 neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0[253939]: [ALERT]    (253947) : Current worker (253949) exited with code 143 (Terminated)
Dec  1 20:01:27 compute-0 neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0[253939]: [WARNING]  (253947) : All workers exited. Exiting... (0)
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.144 189568 DEBUG nova.virt.libvirt.vif [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-201991304',display_name='tempest-ServersTestManualDisk-server-201991304',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-201991304',id=6,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDXKIJaNf7CqoWh7JOYr3T2ezeyWmUGqNR82Xznhp/JccD7+YhSMqoe/FRMjQKDTS9wNNY9dntu4a+xhzKktw1bK7nZ+gYLBifcMHKOv321YPJkytZo0eQBr0ZL7ZZ/Cw==',key_name='tempest-keypair-1442487873',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:01:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='02b2a851f173482691b98aa9a993fbf9',ramdisk_id='',reservation_id='r-ikjv1kvh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1579803427',owner_user_name='tempest-ServersTestManualDisk-1579803427-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:01:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1b42f5bff3ce40c99c067bb358d36444',uuid=5e264735-c003-4c77-8b16-cb48211f837f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.145 189568 DEBUG nova.network.os_vif_util [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Converting VIF {"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:01:27 compute-0 systemd[1]: libpod-574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7.scope: Deactivated successfully.
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.147 189568 DEBUG nova.network.os_vif_util [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:01:de,bridge_name='br-int',has_traffic_filtering=True,id=241aee4b-acee-43c4-b165-e8322c56a1d3,network=Network(50f1d760-d79c-40bd-a9b3-cf73e6f75cf0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241aee4b-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.148 189568 DEBUG os_vif [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:01:de,bridge_name='br-int',has_traffic_filtering=True,id=241aee4b-acee-43c4-b165-e8322c56a1d3,network=Network(50f1d760-d79c-40bd-a9b3-cf73e6f75cf0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241aee4b-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.150 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.151 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap241aee4b-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.153 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.156 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:27 compute-0 podman[254294]: 2025-12-01 20:01:27.156544053 +0000 UTC m=+0.098620280 container died 574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.159 189568 INFO os_vif [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:01:de,bridge_name='br-int',has_traffic_filtering=True,id=241aee4b-acee-43c4-b165-e8322c56a1d3,network=Network(50f1d760-d79c-40bd-a9b3-cf73e6f75cf0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241aee4b-ac')#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.160 189568 INFO nova.virt.libvirt.driver [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Deleting instance files /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f_del#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.162 189568 INFO nova.virt.libvirt.driver [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Deletion of /var/lib/nova/instances/5e264735-c003-4c77-8b16-cb48211f837f_del complete#033[00m
Dec  1 20:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7-userdata-shm.mount: Deactivated successfully.
Dec  1 20:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-43a1576c400e4a043d1f747219044d7cbdec89b586824c0040672cb1dae1439f-merged.mount: Deactivated successfully.
Dec  1 20:01:27 compute-0 podman[254294]: 2025-12-01 20:01:27.21304818 +0000 UTC m=+0.155124367 container cleanup 574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  1 20:01:27 compute-0 systemd[1]: libpod-conmon-574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7.scope: Deactivated successfully.
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.225 189568 INFO nova.compute.manager [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.226 189568 DEBUG oslo.service.loopingcall [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.227 189568 DEBUG nova.compute.manager [-] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.228 189568 DEBUG nova.network.neutron [-] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.277 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.277 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.292 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 20:01:27 compute-0 podman[254336]: 2025-12-01 20:01:27.317270783 +0000 UTC m=+0.066453999 container remove 574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 20:01:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:27.327 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff12cc8-bd71-420a-a14b-1f599eff0b1c]: (4, ('Mon Dec  1 08:01:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0 (574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7)\n574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7\nMon Dec  1 08:01:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0 (574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7)\n574821a8544f913b2c7d7d6f4074b882d230ee7d7020c25c952f12efb1e519b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:27.330 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ca3118-56df-4e21-9556-42b929442718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:27.331 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f1d760-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.334 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:27 compute-0 kernel: tap50f1d760-d0: left promiscuous mode
Dec  1 20:01:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:27.346 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[e07a2f19-2249-44b8-961f-4ee864c8de0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.365 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:27.374 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b32f4f7a-5312-4789-a70e-792d9b3b1d96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:27.376 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f1177bd6-7a0b-4837-af79-178b2d147834]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:27.399 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9d63ef-2463-4d43-b750-a4ebd08340ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576523, 'reachable_time': 32940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254349, 'error': None, 'target': 'ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d50f1d760\x2dd79c\x2d40bd\x2da9b3\x2dcf73e6f75cf0.mount: Deactivated successfully.
Dec  1 20:01:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:27.408 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-50f1d760-d79c-40bd-a9b3-cf73e6f75cf0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 20:01:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:27.408 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[adada425-55a4-419c-8f74-67e3faad3808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:01:27 compute-0 ovn_controller[97948]: 2025-12-01T20:01:27Z|00087|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.603 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:27 compute-0 ovn_controller[97948]: 2025-12-01T20:01:27Z|00088|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:01:27 compute-0 nova_compute[189564]: 2025-12-01 20:01:27.844 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.362 189568 DEBUG nova.compute.manager [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received event network-vif-unplugged-241aee4b-acee-43c4-b165-e8322c56a1d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.362 189568 DEBUG oslo_concurrency.lockutils [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "5e264735-c003-4c77-8b16-cb48211f837f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.362 189568 DEBUG oslo_concurrency.lockutils [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.363 189568 DEBUG oslo_concurrency.lockutils [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.363 189568 DEBUG nova.compute.manager [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] No waiting events found dispatching network-vif-unplugged-241aee4b-acee-43c4-b165-e8322c56a1d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.363 189568 DEBUG nova.compute.manager [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received event network-vif-unplugged-241aee4b-acee-43c4-b165-e8322c56a1d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.364 189568 DEBUG nova.compute.manager [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received event network-vif-plugged-241aee4b-acee-43c4-b165-e8322c56a1d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.364 189568 DEBUG oslo_concurrency.lockutils [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "5e264735-c003-4c77-8b16-cb48211f837f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.364 189568 DEBUG oslo_concurrency.lockutils [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.365 189568 DEBUG oslo_concurrency.lockutils [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.365 189568 DEBUG nova.compute.manager [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] No waiting events found dispatching network-vif-plugged-241aee4b-acee-43c4-b165-e8322c56a1d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.365 189568 WARNING nova.compute.manager [req-4b127446-be44-4ffe-b7d8-1d1f858f6811 req-fea823c8-7287-4e90-bb8c-674414ab8029 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received unexpected event network-vif-plugged-241aee4b-acee-43c4-b165-e8322c56a1d3 for instance with vm_state active and task_state deleting.#033[00m
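
The run from "Received event network-vif-unplugged-..." down to the WARNING above is nova's external-event dispatch: under a per-instance "<uuid>-events" lock it pops any waiter registered for that event name; if none exists it logs "No waiting events found" and, depending on the instance's state, "Received unexpected event". A minimal sketch of the pattern, not nova's actual code:

    import threading
    from collections import defaultdict

    class InstanceEvents:
        """Waiters keyed by (instance, event name); cf. the lock lines above."""

        def __init__(self):
            self._lock = threading.Lock()        # the "<uuid>-events" lock
            self._waiters = defaultdict(dict)    # instance -> {event: Event}

        def prepare(self, instance, event):
            # Called by an operation before it expects the event from neutron.
            with self._lock:
                waiter = threading.Event()
                self._waiters[instance][event] = waiter
                return waiter

        def dispatch(self, instance, event):
            # Called when neutron reports the event via the external events API.
            with self._lock:
                waiter = self._waiters[instance].pop(event, None)
            if waiter is None:
                print(f"No waiting events found dispatching {event}")
            else:
                waiter.set()

During a delete nothing registers a waiter for network-vif-plugged, hence the WARNING above rather than a completed wait.
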
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.415 189568 DEBUG nova.compute.manager [req-c3483195-16b1-4a16-b3dd-104df4b71b0a req-c364ab73-0291-411a-ab45-f187b24b8815 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.416 189568 DEBUG oslo_concurrency.lockutils [req-c3483195-16b1-4a16-b3dd-104df4b71b0a req-c364ab73-0291-411a-ab45-f187b24b8815 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.417 189568 DEBUG oslo_concurrency.lockutils [req-c3483195-16b1-4a16-b3dd-104df4b71b0a req-c364ab73-0291-411a-ab45-f187b24b8815 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.417 189568 DEBUG oslo_concurrency.lockutils [req-c3483195-16b1-4a16-b3dd-104df4b71b0a req-c364ab73-0291-411a-ab45-f187b24b8815 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.417 189568 DEBUG nova.compute.manager [req-c3483195-16b1-4a16-b3dd-104df4b71b0a req-c364ab73-0291-411a-ab45-f187b24b8815 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] No waiting events found dispatching network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.418 189568 WARNING nova.compute.manager [req-c3483195-16b1-4a16-b3dd-104df4b71b0a req-c364ab73-0291-411a-ab45-f187b24b8815 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received unexpected event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e for instance with vm_state active and task_state None.#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.642 189568 DEBUG nova.network.neutron [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Updated VIF entry in instance network info cache for port 241aee4b-acee-43c4-b165-e8322c56a1d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.643 189568 DEBUG nova.network.neutron [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Updating instance_info_cache with network_info: [{"id": "241aee4b-acee-43c4-b165-e8322c56a1d3", "address": "fa:16:3e:94:01:de", "network": {"id": "50f1d760-d79c-40bd-a9b3-cf73e6f75cf0", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1633365007-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02b2a851f173482691b98aa9a993fbf9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241aee4b-ac", "ovs_interfaceid": "241aee4b-acee-43c4-b165-e8322c56a1d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.687 189568 DEBUG oslo_concurrency.lockutils [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-5e264735-c003-4c77-8b16-cb48211f837f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.687 189568 DEBUG nova.compute.manager [req-e678207d-953d-4504-9461-7511d69b56e1 req-683001d3-36a7-42bb-adc8-79f74dbe6a1f 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Received event network-vif-deleted-6f128282-4268-4162-a349-1906ef0a8e4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:28 compute-0 nova_compute[189564]: 2025-12-01 20:01:28.801 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:29 compute-0 nova_compute[189564]: 2025-12-01 20:01:29.632 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:29 compute-0 NetworkManager[56474]: <info>  [1764619289.6333] manager: (patch-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec  1 20:01:29 compute-0 NetworkManager[56474]: <info>  [1764619289.6350] manager: (patch-br-int-to-provnet-d6dc1a29-1c9e-4360-96f3-c2c2e887b11b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec  1 20:01:29 compute-0 podman[203750]: time="2025-12-01T20:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:01:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:01:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Dec  1 20:01:29 compute-0 nova_compute[189564]: 2025-12-01 20:01:29.843 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:29 compute-0 ovn_controller[97948]: 2025-12-01T20:01:29Z|00089|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:01:29 compute-0 nova_compute[189564]: 2025-12-01 20:01:29.886 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:29 compute-0 nova_compute[189564]: 2025-12-01 20:01:29.909 189568 DEBUG nova.network.neutron [-] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:29 compute-0 nova_compute[189564]: 2025-12-01 20:01:29.934 189568 INFO nova.compute.manager [-] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Took 2.71 seconds to deallocate network for instance.#033[00m
Dec  1 20:01:29 compute-0 nova_compute[189564]: 2025-12-01 20:01:29.995 189568 DEBUG oslo_concurrency.lockutils [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:01:29 compute-0 nova_compute[189564]: 2025-12-01 20:01:29.996 189568 DEBUG oslo_concurrency.lockutils [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.101 189568 DEBUG nova.compute.provider_tree [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.121 189568 DEBUG nova.scheduler.client.report [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.155 189568 DEBUG oslo_concurrency.lockutils [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.188 189568 INFO nova.scheduler.client.report [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Deleted allocations for instance 5e264735-c003-4c77-8b16-cb48211f837f#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.258 189568 DEBUG oslo_concurrency.lockutils [None req-84e2e3e5-7d2d-4291-b543-a3a120e057e2 1b42f5bff3ce40c99c067bb358d36444 02b2a851f173482691b98aa9a993fbf9 - - default default] Lock "5e264735-c003-4c77-8b16-cb48211f837f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
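
The inventory dict logged at 20:01:30.121 is what the resource tracker compares against placement. Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio, so the figures above work out as follows (plain arithmetic, reproduced for reference):

    # Inventory exactly as logged by nova.scheduler.client.report above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 70.2
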
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.886 189568 DEBUG nova.compute.manager [req-b3fff76d-5bdc-465b-a8cd-9e226f3721ae req-177dca75-6488-4ddb-8779-9625986fdfe2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Received event network-vif-deleted-241aee4b-acee-43c4-b165-e8322c56a1d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.887 189568 DEBUG nova.compute.manager [req-b3fff76d-5bdc-465b-a8cd-9e226f3721ae req-177dca75-6488-4ddb-8779-9625986fdfe2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-changed-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.889 189568 DEBUG nova.compute.manager [req-b3fff76d-5bdc-465b-a8cd-9e226f3721ae req-177dca75-6488-4ddb-8779-9625986fdfe2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Refreshing instance network info cache due to event network-changed-09097114-7a48-4b64-ab17-ed474efbf80e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.890 189568 DEBUG oslo_concurrency.lockutils [req-b3fff76d-5bdc-465b-a8cd-9e226f3721ae req-177dca75-6488-4ddb-8779-9625986fdfe2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.891 189568 DEBUG oslo_concurrency.lockutils [req-b3fff76d-5bdc-465b-a8cd-9e226f3721ae req-177dca75-6488-4ddb-8779-9625986fdfe2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:01:30 compute-0 nova_compute[189564]: 2025-12-01 20:01:30.892 189568 DEBUG nova.network.neutron [req-b3fff76d-5bdc-465b-a8cd-9e226f3721ae req-177dca75-6488-4ddb-8779-9625986fdfe2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Refreshing network info cache for port 09097114-7a48-4b64-ab17-ed474efbf80e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:01:31 compute-0 podman[254353]: 2025-12-01 20:01:31.34734255 +0000 UTC m=+0.105674690 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 20:01:31 compute-0 openstack_network_exporter[205914]: ERROR   20:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:01:31 compute-0 openstack_network_exporter[205914]: ERROR   20:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:01:31 compute-0 openstack_network_exporter[205914]: ERROR   20:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:01:31 compute-0 openstack_network_exporter[205914]: ERROR   20:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:01:31 compute-0 openstack_network_exporter[205914]: ERROR   20:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
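
The exporter ERRORs above mean it could not find the unix control sockets it probes before issuing appctl calls. Given the container's volume mounts (/run/openvswitch and /run/ovn, per its config logged at 20:01:31), a quick check of what it is looking for might be the following; the socket naming patterns are an assumption (ovsdb-server and ovn-northd conventionally create <name>.<pid>.ctl files):

    import glob

    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "missing, matching the ERROR lines above")

On a compute-only node ovn-northd does not run locally, so the second probe failing is expected noise here rather than a fault.
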
Dec  1 20:01:32 compute-0 nova_compute[189564]: 2025-12-01 20:01:32.155 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:33 compute-0 ovn_controller[97948]: 2025-12-01T20:01:33Z|00090|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:01:33 compute-0 nova_compute[189564]: 2025-12-01 20:01:33.463 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:33 compute-0 nova_compute[189564]: 2025-12-01 20:01:33.805 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:34 compute-0 nova_compute[189564]: 2025-12-01 20:01:34.273 189568 DEBUG nova.network.neutron [req-b3fff76d-5bdc-465b-a8cd-9e226f3721ae req-177dca75-6488-4ddb-8779-9625986fdfe2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Updated VIF entry in instance network info cache for port 09097114-7a48-4b64-ab17-ed474efbf80e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:01:34 compute-0 nova_compute[189564]: 2025-12-01 20:01:34.273 189568 DEBUG nova.network.neutron [req-b3fff76d-5bdc-465b-a8cd-9e226f3721ae req-177dca75-6488-4ddb-8779-9625986fdfe2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Updating instance_info_cache with network_info: [{"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:01:34 compute-0 nova_compute[189564]: 2025-12-01 20:01:34.305 189568 DEBUG oslo_concurrency.lockutils [req-b3fff76d-5bdc-465b-a8cd-9e226f3721ae req-177dca75-6488-4ddb-8779-9625986fdfe2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:01:35 compute-0 nova_compute[189564]: 2025-12-01 20:01:35.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:36 compute-0 podman[254375]: 2025-12-01 20:01:36.351952118 +0000 UTC m=+0.118968922 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:01:37 compute-0 nova_compute[189564]: 2025-12-01 20:01:37.159 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:37 compute-0 ovn_controller[97948]: 2025-12-01T20:01:37Z|00091|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:01:37 compute-0 nova_compute[189564]: 2025-12-01 20:01:37.313 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:38 compute-0 nova_compute[189564]: 2025-12-01 20:01:38.489 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619283.48825, 98c0547a-3efc-4214-85f9-ccceaf32a2a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:38 compute-0 nova_compute[189564]: 2025-12-01 20:01:38.490 189568 INFO nova.compute.manager [-] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] VM Stopped (Lifecycle Event)#033[00m
Dec  1 20:01:38 compute-0 nova_compute[189564]: 2025-12-01 20:01:38.522 189568 DEBUG nova.compute.manager [None req-792c6a61-90af-4aef-a297-58fb93184190 - - - - - -] [instance: 98c0547a-3efc-4214-85f9-ccceaf32a2a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:38 compute-0 nova_compute[189564]: 2025-12-01 20:01:38.808 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:40 compute-0 nova_compute[189564]: 2025-12-01 20:01:40.597 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:40 compute-0 nova_compute[189564]: 2025-12-01 20:01:40.876 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:42 compute-0 nova_compute[189564]: 2025-12-01 20:01:42.121 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619287.1201162, 5e264735-c003-4c77-8b16-cb48211f837f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:01:42 compute-0 nova_compute[189564]: 2025-12-01 20:01:42.122 189568 INFO nova.compute.manager [-] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] VM Stopped (Lifecycle Event)#033[00m
Dec  1 20:01:42 compute-0 nova_compute[189564]: 2025-12-01 20:01:42.143 189568 DEBUG nova.compute.manager [None req-68c7a6f7-f46c-42c0-8999-4996a6efbf37 - - - - - -] [instance: 5e264735-c003-4c77-8b16-cb48211f837f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:01:42 compute-0 nova_compute[189564]: 2025-12-01 20:01:42.163 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:43 compute-0 podman[254399]: 2025-12-01 20:01:43.316897203 +0000 UTC m=+0.084108558 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  1 20:01:43 compute-0 nova_compute[189564]: 2025-12-01 20:01:43.810 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:43 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:43.950 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:01:43 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:43.951 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 20:01:43 compute-0 nova_compute[189564]: 2025-12-01 20:01:43.953 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:46 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:01:46.952 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
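
The transaction above records, in Chassis_Private.external_ids, the nb_cfg value the agent has processed (10, matching the SB_Global update at 20:01:43); this is how OVN can tell the metadata agent is alive and caught up. A hedged sketch of the same db_set through ovsdbapp, assuming a local southbound socket (the path and the connection boilerplate are assumptions; neutron holds a long-lived connection instead):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB = "unix:/run/ovn/ovnsb_db.sock"   # assumed socket path
    idl = connection.OvsdbIdl.from_server(SB, "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Mirrors the DbSetCommand(..., if_exists=True) in the log line above.
    api.db_set(
        "Chassis_Private",
        "91869463-7ce7-4561-8225-db4a77bb5f12",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "10"}),
        if_exists=True,
    ).execute(check_error=True)
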
Dec  1 20:01:46 compute-0 nova_compute[189564]: 2025-12-01 20:01:46.993 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:47 compute-0 nova_compute[189564]: 2025-12-01 20:01:47.165 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:47 compute-0 nova_compute[189564]: 2025-12-01 20:01:47.253 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:47 compute-0 nova_compute[189564]: 2025-12-01 20:01:47.294 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:47 compute-0 podman[254418]: 2025-12-01 20:01:47.334704117 +0000 UTC m=+0.105180633 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 20:01:48 compute-0 nova_compute[189564]: 2025-12-01 20:01:48.814 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.821 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.822 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.823 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.828 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4a104baa-5fd5-47aa-973b-11d99c76c3e2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 20:01:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:48.829 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4a104baa-5fd5-47aa-973b-11d99c76c3e2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 20:01:50 compute-0 nova_compute[189564]: 2025-12-01 20:01:50.268 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:01:50 compute-0 nova_compute[189564]: 2025-12-01 20:01:50.268 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.329 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1981 Content-Type: application/json Date: Mon, 01 Dec 2025 20:01:49 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-30b96b04-94e9-4626-a1c9-28279f985811 x-openstack-request-id: req-30b96b04-94e9-4626-a1c9-28279f985811 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.330 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4a104baa-5fd5-47aa-973b-11d99c76c3e2", "name": "tempest-ServerActionsTestJSON-server-1064429924", "status": "ACTIVE", "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "user_id": "89c8a8cb31224140bf2b9c0b94acfe04", "metadata": {}, "hostId": "1b1a73eeccf63f76a6d7c21e57a5ecb8f82f7b9a17500c23d6e3f562", "image": {"id": "d169c234-7ac2-4fdc-b9fa-a08c93484d75", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/d169c234-7ac2-4fdc-b9fa-a08c93484d75"}]}, "flavor": {"id": "69252fc0-77e5-4ac1-807d-77003542464f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/69252fc0-77e5-4ac1-807d-77003542464f"}]}, "created": "2025-12-01T20:01:10Z", "updated": "2025-12-01T20:01:26Z", "addresses": {"tempest-ServerActionsTestJSON-188173667-network": [{"version": 4, "addr": "10.100.0.13", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3e:bf:1a"}, {"version": 4, "addr": "192.168.122.211", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:3e:bf:1a"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4a104baa-5fd5-47aa-973b-11d99c76c3e2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4a104baa-5fd5-47aa-973b-11d99c76c3e2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1301911410", "OS-SRV-USG:launched_at": "2025-12-01T20:01:26.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1638647414"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.330 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4a104baa-5fd5-47aa-973b-11d99c76c3e2 used request id req-30b96b04-94e9-4626-a1c9-28279f985811 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
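
The REQ/RESP pair above is keystoneauth's curl-style debug logging of a plain GET against the compute API (note the X-Auth-Token header is logged as a SHA256 digest, never the token itself). The equivalent call without novaclient, with the token necessarily redacted:

    import requests

    resp = requests.get(
        "https://nova-internal.openstack.svc:8774/v2.1/servers/"
        "4a104baa-5fd5-47aa-973b-11d99c76c3e2",
        headers={
            "Accept": "application/json",
            "User-Agent": "python-novaclient",
            "X-Auth-Token": "<redacted>",
            "X-OpenStack-Nova-API-Version": "2.1",
        },
        timeout=30,
    )
    print(resp.json()["server"]["status"])   # "ACTIVE", per the RESP BODY above
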
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.331 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4a104baa-5fd5-47aa-973b-11d99c76c3e2', 'name': 'tempest-ServerActionsTestJSON-server-1064429924', 'flavor': {'id': '69252fc0-77e5-4ac1-807d-77003542464f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '5102d72cb1ce4e6da810b2584a2abd73', 'user_id': '89c8a8cb31224140bf2b9c0b94acfe04', 'hostId': '1b1a73eeccf63f76a6d7c21e57a5ecb8f82f7b9a17500c23d6e3f562', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.331 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.331 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.331 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
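
The coordination check logged above gates each pollster on whether its polling source belongs to a tooz hash ring; with no coordination group (the [None] case here), every agent simply polls its local instances. A toy sketch of that decision, with a trivial stand-in for the hash ring -- the ring logic and names are illustrative, not ceilometer internals:

    import hashlib

    def _ring_owner(agents, resource_id):
        # toy consistent-hash stand-in for a tooz hash ring
        point = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        ring = sorted((int(hashlib.md5(a.encode()).hexdigest(), 16), a)
                      for a in agents)
        for agent_point, agent in ring:
            if point <= agent_point:
                return agent
        return ring[0][1]  # wrap around

    def should_poll(pollster, coordinated_sources, agents, me, resource_id):
        if pollster not in coordinated_sources:
            return True  # the "[None]" case: no coordination, poll locally
        return _ring_owner(agents, resource_id) == me
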
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.331 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T20:01:50.331760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
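
Note the two PIDs in each heartbeat pair: the polling worker (15 here) emits the heartbeat and a sibling status process (12) records the timestamp a moment later. A sketch of that bookkeeping, using a plain dict where ceilometer uses inter-process machinery -- an assumption for illustration:

    from datetime import datetime, timezone

    heartbeats = {}  # meter name -> last heartbeat timestamp

    def heartbeat(meter):
        heartbeats[meter] = datetime.now(timezone.utc)

    def is_stale(meter, max_age_s=600):
        ts = heartbeats.get(meter)
        if ts is None:
            return True
        return (datetime.now(timezone.utc) - ts).total_seconds() > max_age_s

    heartbeat("network.incoming.bytes.delta")
    print(is_stale("network.incoming.bytes.delta"))  # False
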
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.335 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4a104baa-5fd5-47aa-973b-11d99c76c3e2 / tap09097114-7a inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.336 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.336 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
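
The .delta cycle above shows how delta meters bootstrap: the first observation of an (instance, tap device) pair has no predecessor, so the sample volume is 0; subsequent polls report the difference against the cached counter. A minimal sketch (the cache layout is an assumption):

    _previous = {}  # (instance_id, device) -> last raw counter

    def delta(instance_id, device, counter):
        key = (instance_id, device)
        prev = _previous.get(key)
        _previous[key] = counter
        if prev is None:
            return 0  # "No delta meter predecessor" -> volume: 0
        return max(counter - prev, 0)  # clamp in case the counter reset

    print(delta("4a104baa", "tap09097114-7a", 90))   # 0 (first poll)
    print(delta("4a104baa", "tap09097114-7a", 250))  # 160
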
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.336 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.336 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.336 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.336 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.337 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.337 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.337 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.337 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T20:01:50.337025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.337 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.338 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.338 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.338 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.338 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.339 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T20:01:50.338593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.339 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.339 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.339 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.339 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.339 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.339 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.340 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T20:01:50.339767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.340 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.340 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.340 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.340 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.340 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T20:01:50.340773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.340 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.341 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.341 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.341 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.341 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.341 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.342 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T20:01:50.341988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.365 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.366 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.366 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
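
Two capacity samples for one instance means one sample per block device; with config_drive "True" this guest has a 1 GiB root disk (1073741824) plus a small config-drive device (509952). The per-device numbers can be read straight from libvirt, roughly like this -- a sketch requiring libvirt-python and access to the local libvirtd, not ceilometer's actual inspector:

    import xml.etree.ElementTree as ET
    import libvirt

    def device_capacities(uri="qemu:///system", name="instance-00000007"):
        conn = libvirt.open(uri)
        try:
            dom = conn.lookupByName(name)
            tree = ET.fromstring(dom.XMLDesc(0))
            for target in tree.findall("./devices/disk/target"):
                dev = target.get("dev")  # e.g. vda, sda
                capacity, allocation, physical = dom.blockInfo(dev)
                yield dev, capacity
        finally:
            conn.close()
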
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.366 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.366 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.366 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.366 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.367 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T20:01:50.367043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 podman[254445]: 2025-12-01 20:01:50.37446287 +0000 UTC m=+0.108709224 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 20:01:50 compute-0 podman[254443]: 2025-12-01 20:01:50.3808833 +0000 UTC m=+0.138142620 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-type=git, io.openshift.tags=base rhel9, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 20:01:50 compute-0 podman[254444]: 2025-12-01 20:01:50.39182724 +0000 UTC m=+0.136418025 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:01:50 compute-0 podman[254451]: 2025-12-01 20:01:50.407469956 +0000 UTC m=+0.138967134 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
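
Each podman health_status event above is the result of running the container's configured healthcheck command (e.g. '/openstack/healthcheck compute') and recording the outcome. The same state can be read back on the node with the standard podman CLI; a small convenience wrapper, with the container name taken from the log:

    import json
    import subprocess

    def health(container):
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", container],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)
        return state.get("Status"), state.get("FailingStreak")

    print(health("ceilometer_agent_compute"))  # e.g. ('healthy', 0)
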
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.430 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.432 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.434 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.434 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.434 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.434 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.435 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.435 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.435 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.435 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
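
Cumulative vnic counters such as the 90-byte sample above ultimately come from libvirt's per-interface statistics. A sketch using libvirt-python; the domain and tap names are the ones from this log, reused for illustration:

    import libvirt

    def vnic_counters(uri="qemu:///system", name="instance-00000007",
                      tap="tap09097114-7a"):
        conn = libvirt.open(uri)
        try:
            dom = conn.lookupByName(name)
            (rx_bytes, rx_packets, rx_errs, rx_drop,
             tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats(tap)
            return {
                "network.incoming.bytes": rx_bytes,
                "network.incoming.packets": rx_packets,
                "network.outgoing.bytes": tx_bytes,
                "network.outgoing.packets": tx_packets,
                "network.outgoing.packets.drop": tx_drop,
                "network.outgoing.packets.error": tx_errs,
            }
        finally:
            conn.close()
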
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.435 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T20:01:50.435102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.435 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.436 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.436 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.436 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.436 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.436 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1064429924>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1064429924>]
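
The ERROR above is the permanent-blacklist path: the DEBUG line just before it says the LibvirtInspector does not provide data for the .rate pollster at all, so rather than failing every cycle the pollster raises PollsterPermanentError and the manager stops offering that resource to it. A sketch of the mechanic -- the class layout and attribute names are illustrative; only the exception's name comes from the log:

    class PollsterPermanentError(Exception):
        """Raised when a resource can never be polled by this pollster."""
        def __init__(self, resources):
            super().__init__(resources)
            self.fail = resources

    _blacklist = set()  # (pollster, resource_id) pairs, never retried

    def run_pollster(name, poll_fn, resources):
        candidates = [r for r in resources if (name, r) not in _blacklist]
        try:
            return list(poll_fn(candidates))
        except PollsterPermanentError as exc:
            _blacklist.update((name, r) for r in exc.fail)
            return []
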
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.437 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T20:01:50.436549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.437 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.437 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.437 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.437 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.437 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.read.latency volume: 591516891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T20:01:50.437865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.438 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.read.latency volume: 865227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.438 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.438 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.439 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.439 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.439 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.439 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.439 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T20:01:50.439222) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.439 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.440 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.440 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.440 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.440 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.440 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.440 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.440 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.441 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 podman[254455]: 2025-12-01 20:01:50.4410191 +0000 UTC m=+0.161329170 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.441 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.441 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.441 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.441 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.441 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.442 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.442 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T20:01:50.440517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T20:01:50.441981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.442 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.443 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.444 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.444 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.444 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.445 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.445 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T20:01:50.445441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.469 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.470 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
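
The power.state volume of 1 matches the "OS-EXT-STS:power_state": 1 in the RESP BODY earlier: both use Nova's power-state enumeration, where 1 means RUNNING. For reference, the codes as defined in nova.compute.power_state:

    # Nova power-state codes (nova.compute.power_state)
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    assert POWER_STATES[1] == "RUNNING"  # the sample polled above
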
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.470 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.470 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.470 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.470 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.470 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T20:01:50.470910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.473 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.474 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.476 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.476 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.476 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.476 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.476 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.476 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.477 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.477 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T20:01:50.476941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.478 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.478 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.478 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.478 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.478 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.478 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.478 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.479 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.479 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T20:01:50.478778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.479 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
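
Taken together, the three per-device disk meters plausibly line up with libvirt's blockInfo triple: disk.device.capacity as the virtual size, disk.device.allocation as the image allocation, and disk.device.usage as the physical bytes on the host. That mapping is inferred from the numbers here, not stated in the log, but the root-disk arithmetic is consistent with a thin 1 GiB image:

    # Root disk of instance-00000007, values from the samples above
    capacity, allocation, usage = 1073741824, 204800, 196624
    assert capacity == 1 * 1024**3  # 1 GiB: the m1.nano flavor's 1 GB disk
    # only ~200 KiB of the 1 GiB virtual size is allocated so far
    print(f"thin provisioning ratio: {capacity / allocation:.0f}x")
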
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.480 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.480 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.480 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.480 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.480 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.481 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T20:01:50.480626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.481 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.481 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.481 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.481 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.482 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.482 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.482 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.482 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.482 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.482 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T20:01:50.481992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.483 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T20:01:50.483513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.483 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.484 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.484 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.484 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.484 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.484 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.484 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T20:01:50.484818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.484 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.485 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.485 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.485 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.485 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.486 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.486 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.486 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.486 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.487 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.487 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T20:01:50.486163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.487 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.487 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.487 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.488 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.488 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.488 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.488 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T20:01:50.487849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.488 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.489 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.489 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.489 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.489 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T20:01:50.489338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.489 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1064429924>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1064429924>]
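The ERROR above is ceilometer's blacklisting mechanism, not a crash: LibvirtInspector cannot produce *.rate data (it has no prior sample to diff against), so the pollster raises PollsterPermanentError and the manager stops polling that resource on this source. A sketch of the pattern, assuming only the semantics visible in the log (attribute and helper names are illustrative):

    class PollsterPermanentError(Exception):
        """Carries the resources the manager should never poll again."""
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    def make_sample(datum):
        return {"volume": datum}

    def get_samples(inspector_data, resources):
        if inspector_data is None:  # "LibvirtInspector does not provide data"
            raise PollsterPermanentError(resources)
        return [make_sample(d) for d in inspector_data]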
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.490 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.490 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.490 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.490 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.490 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.490 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/cpu volume: 23670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.490 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.490 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.491 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T20:01:50.490406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.491 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.491 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.491 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T20:01:50.491789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.492 15 DEBUG ceilometer.compute.pollsters [-] 4a104baa-5fd5-47aa-973b-11d99c76c3e2/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.492 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 4a104baa-5fd5-47aa-973b-11d99c76c3e2: ceilometer.compute.pollsters.NoVolumeException
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.492 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
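memory.usage comes from the libvirt balloon statistics, which are only populated once the guest's memballoon driver responds; until then the inspector reports no value and the pollster deliberately emits no sample rather than a zero. A sketch of that guard (the NoVolumeException semantics are taken from the warning above; other names are illustrative):

    class NoVolumeException(Exception):
        pass

    def stats_to_sample(stats):
        volume = getattr(stats, "memory_usage", None)
        if volume is None:            # logged as "volume: Unavailable"
            raise NoVolumeException()
        return {"meter": "memory.usage", "volume": volume}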
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.492 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.493 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.493 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.493 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.493 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.493 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.493 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.494 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.495 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.495 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.495 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.495 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.495 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.495 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.496 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.496 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:01:50.496 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:01:52 compute-0 nova_compute[189564]: 2025-12-01 20:01:52.169 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:01:53 compute-0 nova_compute[189564]: 2025-12-01 20:01:53.816 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:01:54 compute-0 nova_compute[189564]: 2025-12-01 20:01:54.413 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:01:57 compute-0 nova_compute[189564]: 2025-12-01 20:01:57.174 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:01:58 compute-0 nova_compute[189564]: 2025-12-01 20:01:58.819 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
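The recurring "[POLLIN] on fd 27" lines are the OVSDB IDL's event loop waking whenever the ovsdb-server connection has data to read; each wakeup drains the update and re-arms the poll. The same level-triggered pattern with the stdlib (the fd number comes from the log; the address and everything else below is illustrative):

    import select
    import socket

    sock = socket.create_connection(("127.0.0.1", 6640))  # assumed OVSDB endpoint
    poller = select.poll()
    poller.register(sock.fileno(), select.POLLIN)
    for fd, event in poller.poll(5000):     # block up to 5 s
        if event & select.POLLIN:
            data = sock.recv(4096)          # drain, process, then poll again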
Dec  1 20:01:59 compute-0 podman[203750]: time="2025-12-01T20:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:01:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:01:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
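The podman lines are the libpod REST API being scraped over its UNIX socket (the "@" is the peer-address placeholder for socket clients): a Go HTTP client lists all containers, then fetches one round of stats. Roughly the same first query from Python, assuming the default root socket path:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a UNIX socket, enough for the libpod API."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")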
Dec  1 20:01:59 compute-0 nova_compute[189564]: 2025-12-01 20:01:59.853 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:01:59 compute-0 nova_compute[189564]: 2025-12-01 20:01:59.853 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
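The Acquiring/acquired pair above is oslo.concurrency's lock logging: nova serializes all build work for one instance behind a lock named after the instance UUID, so a duplicate build request waits instead of racing. The equivalent pattern (a sketch; the decorated body is elided):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0")
    def locked_do_build_and_run_instance():
        ...  # only one thread builds this instance at a time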
Dec  1 20:01:59 compute-0 ovn_controller[97948]: 2025-12-01T20:01:59Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:bf:1a 10.100.0.13
Dec  1 20:01:59 compute-0 nova_compute[189564]: 2025-12-01 20:01:59.872 189568 DEBUG nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 20:01:59 compute-0 ovn_controller[97948]: 2025-12-01T20:01:59Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:bf:1a 10.100.0.13
Dec  1 20:01:59 compute-0 nova_compute[189564]: 2025-12-01 20:01:59.967 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:01:59 compute-0 nova_compute[189564]: 2025-12-01 20:01:59.968 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:01:59 compute-0 nova_compute[189564]: 2025-12-01 20:01:59.976 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 20:01:59 compute-0 nova_compute[189564]: 2025-12-01 20:01:59.977 189568 INFO nova.compute.claims [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Claim successful on node compute-0.ctlplane.example.com
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.135 189568 DEBUG nova.compute.provider_tree [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.149 189568 DEBUG nova.scheduler.client.report [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
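That inventory dict is what the Placement service schedules against. Using placement's usual capacity formula, capacity = (total - reserved) * allocation_ratio, this host offers 32 vCPUs (8 physical at 4x overcommit), 7168 MB of RAM, and 70.2 GB of disk:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2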
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.169 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.170 189568 DEBUG nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.213 189568 DEBUG nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.214 189568 DEBUG nova.network.neutron [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.245 189568 INFO nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.262 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "4ace6300-5447-4f61-9b27-a7249155c57b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.262 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.279 189568 DEBUG nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.322 189568 DEBUG nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.394 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.394 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.403 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.403 189568 INFO nova.compute.claims [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Claim successful on node compute-0.ctlplane.example.com
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.412 189568 DEBUG nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.413 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.414 189568 INFO nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Creating image(s)
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.414 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "/var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.415 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "/var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.415 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "/var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.427 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.485 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
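Note the prlimit wrapper around every qemu-img call: image introspection parses untrusted data, so nova caps the child at 1 GiB of address space and 30 s of CPU time. A stdlib sketch of what that wrapper enforces (nova itself shells out to oslo_concurrency.prlimit as logged above; the path is taken from the log):

    import resource
    import subprocess

    def limited():
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))   # --as
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))            # --cpu

    subprocess.run(
        ["qemu-img", "info",
         "/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd",
         "--force-share", "--output=json"],
        preexec_fn=limited, env={"LC_ALL": "C", "LANG": "C"}, check=True)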
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.486 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.486 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.497 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.562 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.564 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.588 189568 DEBUG nova.compute.provider_tree [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.606 189568 DEBUG nova.scheduler.client.report [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.611 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.612 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
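The qemu-img create above is the copy-on-write step of the Qcow2 image backend: the instance disk is a thin overlay whose backing file is the shared _base image, so only writes consume new space, and the trailing 1073741824 (1 GiB) is the flavor's virtual root-disk size. The same invocation from Python (paths taken from the CMD line above):

    import subprocess

    base = "/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd"
    overlay = "/var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk"
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         overlay, "1073741824"],   # 1 GiB virtual size
        check=True)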
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.613 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.639 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.641 189568 DEBUG nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.675 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.678 189568 DEBUG nova.virt.disk.api [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Checking if we can resize image /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.681 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.706 189568 DEBUG nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.707 189568 DEBUG nova.network.neutron [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.724 189568 INFO nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.741 189568 DEBUG nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.748 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.749 189568 DEBUG nova.virt.disk.api [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Cannot resize image /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.749 189568 DEBUG nova.objects.instance [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lazy-loading 'migration_context' on Instance uuid 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.768 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.768 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Ensure instance console log exists: /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.769 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.770 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.770 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.789 189568 DEBUG nova.policy [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b7979dae5a4746189d660cfad52a7031', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '074be7edf37d4e09a02286825460dcb3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
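The "Policy check ... failed" DEBUG is expected, not an error: nova probes the network:attach_external_network rule to decide whether this user may plug into external networks, and a token carrying only the member and reader roles is denied, so nova simply restricts the port choices. A sketch of the same check with oslo.policy (the "is_admin:True" default rule is an assumption for illustration):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "is_admin:True"))  # assumed rule
    creds = {"roles": ["member", "reader"], "is_admin": False}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # False -> nova proceeds without external-network attach rights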
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.836 189568 DEBUG nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.838 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.839 189568 INFO nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Creating image(s)#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.839 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "/var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.840 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "/var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.841 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "/var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.853 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.929 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.932 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.933 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:00 compute-0 nova_compute[189564]: 2025-12-01 20:02:00.946 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.020 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.021 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.058 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.059 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.060 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.117 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.119 189568 DEBUG nova.virt.disk.api [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Checking if we can resize image /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.120 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.186 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.192 189568 DEBUG nova.virt.disk.api [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Cannot resize image /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.193 189568 DEBUG nova.objects.instance [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lazy-loading 'migration_context' on Instance uuid 4ace6300-5447-4f61-9b27-a7249155c57b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.209 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.210 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Ensure instance console log exists: /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.211 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.211 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.212 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:01 compute-0 openstack_network_exporter[205914]: ERROR   20:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:02:01 compute-0 openstack_network_exporter[205914]: ERROR   20:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:02:01 compute-0 openstack_network_exporter[205914]: ERROR   20:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:02:01 compute-0 openstack_network_exporter[205914]: ERROR   20:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:02:01 compute-0 openstack_network_exporter[205914]: ERROR   20:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:02:01 compute-0 nova_compute[189564]: 2025-12-01 20:02:01.492 189568 DEBUG nova.policy [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4faf878be724ad8aa31fd034c9818d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4517904b95d64f0c874d5afda12566c4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 20:02:02 compute-0 nova_compute[189564]: 2025-12-01 20:02:02.179 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:02 compute-0 podman[254592]: 2025-12-01 20:02:02.344301254 +0000 UTC m=+0.110916863 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, config_id=edpm, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  1 20:02:03 compute-0 nova_compute[189564]: 2025-12-01 20:02:03.221 189568 DEBUG nova.network.neutron [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Successfully created port: 7101ff55-a92d-431c-8cc4-8b3412507465 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 20:02:03 compute-0 nova_compute[189564]: 2025-12-01 20:02:03.285 189568 DEBUG nova.network.neutron [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Successfully created port: 5f412491-e88a-4387-aa56-6b4e024e1eb2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 20:02:03 compute-0 nova_compute[189564]: 2025-12-01 20:02:03.823 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:04 compute-0 nova_compute[189564]: 2025-12-01 20:02:04.875 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.265 189568 DEBUG nova.network.neutron [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Successfully updated port: 7101ff55-a92d-431c-8cc4-8b3412507465 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.290 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.291 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquired lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.291 189568 DEBUG nova.network.neutron [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.690 189568 DEBUG nova.network.neutron [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.710 189568 DEBUG nova.network.neutron [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Successfully updated port: 5f412491-e88a-4387-aa56-6b4e024e1eb2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.724 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "refresh_cache-40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.724 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquired lock "refresh_cache-40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.724 189568 DEBUG nova.network.neutron [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.871 189568 DEBUG nova.compute.manager [req-2560b864-a8e9-418e-8c94-d3a19a52509b req-3821a581-a845-421d-a008-ad1af8a48c42 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received event network-changed-7101ff55-a92d-431c-8cc4-8b3412507465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.872 189568 DEBUG nova.compute.manager [req-2560b864-a8e9-418e-8c94-d3a19a52509b req-3821a581-a845-421d-a008-ad1af8a48c42 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Refreshing instance network info cache due to event network-changed-7101ff55-a92d-431c-8cc4-8b3412507465. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:02:05 compute-0 nova_compute[189564]: 2025-12-01 20:02:05.873 189568 DEBUG oslo_concurrency.lockutils [req-2560b864-a8e9-418e-8c94-d3a19a52509b req-3821a581-a845-421d-a008-ad1af8a48c42 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:06 compute-0 nova_compute[189564]: 2025-12-01 20:02:06.008 189568 DEBUG nova.compute.manager [req-33d23549-70ee-4985-b2f7-181448ab699f req-7c4b8cd8-2bcc-4d47-a4d2-899051b3584a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Received event network-changed-5f412491-e88a-4387-aa56-6b4e024e1eb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:06 compute-0 nova_compute[189564]: 2025-12-01 20:02:06.008 189568 DEBUG nova.compute.manager [req-33d23549-70ee-4985-b2f7-181448ab699f req-7c4b8cd8-2bcc-4d47-a4d2-899051b3584a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Refreshing instance network info cache due to event network-changed-5f412491-e88a-4387-aa56-6b4e024e1eb2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:02:06 compute-0 nova_compute[189564]: 2025-12-01 20:02:06.008 189568 DEBUG oslo_concurrency.lockutils [req-33d23549-70ee-4985-b2f7-181448ab699f req-7c4b8cd8-2bcc-4d47-a4d2-899051b3584a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:06 compute-0 nova_compute[189564]: 2025-12-01 20:02:06.144 189568 DEBUG nova.network.neutron [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 20:02:06 compute-0 nova_compute[189564]: 2025-12-01 20:02:06.693 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.076 189568 DEBUG nova.network.neutron [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updating instance_info_cache with network_info: [{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.096 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Releasing lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.096 189568 DEBUG nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Instance network_info: |[{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.097 189568 DEBUG oslo_concurrency.lockutils [req-2560b864-a8e9-418e-8c94-d3a19a52509b req-3821a581-a845-421d-a008-ad1af8a48c42 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.097 189568 DEBUG nova.network.neutron [req-2560b864-a8e9-418e-8c94-d3a19a52509b req-3821a581-a845-421d-a008-ad1af8a48c42 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Refreshing network info cache for port 7101ff55-a92d-431c-8cc4-8b3412507465 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.102 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Start _get_guest_xml network_info=[{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.112 189568 WARNING nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.120 189568 DEBUG nova.virt.libvirt.host [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.121 189568 DEBUG nova.virt.libvirt.host [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.134 189568 DEBUG nova.virt.libvirt.host [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.134 189568 DEBUG nova.virt.libvirt.host [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.135 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.135 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.136 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.137 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.137 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.137 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.138 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.138 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.139 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.140 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.140 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.140 189568 DEBUG nova.virt.hardware [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.147 189568 DEBUG nova.virt.libvirt.vif [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1479640136',display_name='tempest-AttachInterfacesUnderV243Test-server-1479640136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1479640136',id=9,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPVZoWW8r4f94xaa9hAUCfBMAMdM1AmJScI4znu9hdCX1jEINzVnS4DsiCUu/xmx9ibNZ0YEMnpa2LoFXPPqSMLj/g4TA6XBMSRJA8vxRXcj98f9dTCmQdhYfylR7YynQ==',key_name='tempest-keypair-1081056876',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4517904b95d64f0c874d5afda12566c4',ramdisk_id='',reservation_id='r-k1ogh0w2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1152149572',owner_user_name='tempest-AttachInterfacesUnderV243Test-1152149572-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:02:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4faf878be724ad8aa31fd034c9818d9',uuid=4ace6300-5447-4f61-9b27-a7249155c57b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.147 189568 DEBUG nova.network.os_vif_util [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Converting VIF {"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.149 189568 DEBUG nova.network.os_vif_util [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:55:e7,bridge_name='br-int',has_traffic_filtering=True,id=7101ff55-a92d-431c-8cc4-8b3412507465,network=Network(f6d551f8-4db8-41ef-9a06-51292bc6bab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7101ff55-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.150 189568 DEBUG nova.objects.instance [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4ace6300-5447-4f61-9b27-a7249155c57b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.167 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <uuid>4ace6300-5447-4f61-9b27-a7249155c57b</uuid>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <name>instance-00000009</name>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1479640136</nova:name>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:02:07</nova:creationTime>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:02:07 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:02:07 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:02:07 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:02:07 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:02:07 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:02:07 compute-0 nova_compute[189564]:        <nova:user uuid="f4faf878be724ad8aa31fd034c9818d9">tempest-AttachInterfacesUnderV243Test-1152149572-project-member</nova:user>
Dec  1 20:02:07 compute-0 nova_compute[189564]:        <nova:project uuid="4517904b95d64f0c874d5afda12566c4">tempest-AttachInterfacesUnderV243Test-1152149572</nova:project>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="d169c234-7ac2-4fdc-b9fa-a08c93484d75"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:02:07 compute-0 nova_compute[189564]:        <nova:port uuid="7101ff55-a92d-431c-8cc4-8b3412507465">
Dec  1 20:02:07 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <system>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <entry name="serial">4ace6300-5447-4f61-9b27-a7249155c57b</entry>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <entry name="uuid">4ace6300-5447-4f61-9b27-a7249155c57b</entry>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    </system>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <os>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  </os>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <features>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  </features>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk.config"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:69:55:e7"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <target dev="tap7101ff55-a9"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/console.log" append="off"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <video>
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    </video>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:02:07 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:02:07 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:02:07 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:02:07 compute-0 nova_compute[189564]: </domain>
Dec  1 20:02:07 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
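
The XML dump ending above is the complete guest definition nova hands to libvirt for instance 4ace6300-5447-4f61-9b27-a7249155c57b. For inspecting such a dump offline, here is a minimal sketch using only the Python standard library; the file name is illustrative, and the nova namespace URI is the one visible in the <metadata> block of the dump:

    import xml.etree.ElementTree as ET

    # Parse a saved copy of the domain XML dumped above (path is illustrative).
    root = ET.parse("instance-domain.xml").getroot()

    # Namespace used by the <nova:instance> metadata in the dump.
    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    for iface in root.findall("./devices/interface"):
        mac = iface.find("mac").attrib["address"]
        tap = iface.find("target").attrib["dev"]
        mtu = iface.find("mtu").attrib["size"]
        print(f"interface {tap}: mac={mac} mtu={mtu}")

    # Flavor details live under <metadata><nova:instance><nova:flavor>.
    flavor = root.find("./metadata/nova:instance/nova:flavor", NOVA_NS)
    if flavor is not None:
        print("flavor:", flavor.attrib["name"])
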
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.167 189568 DEBUG nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Preparing to wait for external event network-vif-plugged-7101ff55-a92d-431c-8cc4-8b3412507465 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.168 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.168 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.168 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.168 189568 DEBUG nova.virt.libvirt.vif [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1479640136',display_name='tempest-AttachInterfacesUnderV243Test-server-1479640136',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1479640136',id=9,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPVZoWW8r4f94xaa9hAUCfBMAMdM1AmJScI4znu9hdCX1jEINzVnS4DsiCUu/xmx9ibNZ0YEMnpa2LoFXPPqSMLj/g4TA6XBMSRJA8vxRXcj98f9dTCmQdhYfylR7YynQ==',key_name='tempest-keypair-1081056876',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4517904b95d64f0c874d5afda12566c4',ramdisk_id='',reservation_id='r-k1ogh0w2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1152149572',owner_user_name='tempest-AttachInterfacesUnderV243Test-1152149572-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:02:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4faf878be724ad8aa31fd034c9818d9',uuid=4ace6300-5447-4f61-9b27-a7249155c57b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", 
"ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.169 189568 DEBUG nova.network.os_vif_util [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Converting VIF {"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.169 189568 DEBUG nova.network.os_vif_util [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:55:e7,bridge_name='br-int',has_traffic_filtering=True,id=7101ff55-a92d-431c-8cc4-8b3412507465,network=Network(f6d551f8-4db8-41ef-9a06-51292bc6bab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7101ff55-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.169 189568 DEBUG os_vif [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:55:e7,bridge_name='br-int',has_traffic_filtering=True,id=7101ff55-a92d-431c-8cc4-8b3412507465,network=Network(f6d551f8-4db8-41ef-9a06-51292bc6bab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7101ff55-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.170 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.170 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.170 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.173 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.173 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7101ff55-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.173 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7101ff55-a9, col_values=(('external_ids', {'iface-id': '7101ff55-a92d-431c-8cc4-8b3412507465', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:55:e7', 'vm-uuid': '4ace6300-5447-4f61-9b27-a7249155c57b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.175 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:07 compute-0 NetworkManager[56474]: <info>  [1764619327.1763] manager: (tap7101ff55-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.177 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.185 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.185 189568 INFO os_vif [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:55:e7,bridge_name='br-int',has_traffic_filtering=True,id=7101ff55-a92d-431c-8cc4-8b3412507465,network=Network(f6d551f8-4db8-41ef-9a06-51292bc6bab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7101ff55-a9')#033[00m
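
The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, and DbSetCommand on the Interface table) map one-to-one onto ovs-vsctl operations. A hedged sketch reproducing the same plug from Python via subprocess, with the values copied from the log and ovs-vsctl assumed to be installed:

    import subprocess

    BRIDGE = "br-int"
    TAP = "tap7101ff55-a9"
    EXTERNAL_IDS = {
        "iface-id": "7101ff55-a92d-431c-8cc4-8b3412507465",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:69:55:e7",
        "vm-uuid": "4ace6300-5447-4f61-9b27-a7249155c57b",
    }

    # --may-exist mirrors may_exist=True above, which is why AddBridgeCommand
    # commits as "Transaction caused no change" when br-int already exists.
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", BRIDGE], check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", BRIDGE, TAP], check=True)

    # DbSetCommand(table=Interface, ...) corresponds to `ovs-vsctl set Interface`;
    # values are quoted because MACs and UUIDs contain ':' and '-'.
    settings = [f'external_ids:{key}="{value}"' for key, value in EXTERNAL_IDS.items()]
    subprocess.run(["ovs-vsctl", "set", "Interface", TAP, *settings], check=True)
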
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.241 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.241 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.241 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] No VIF found with MAC fa:16:3e:69:55:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:02:07 compute-0 nova_compute[189564]: 2025-12-01 20:02:07.241 189568 INFO nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Using config drive#033[00m
Dec  1 20:02:07 compute-0 podman[254620]: 2025-12-01 20:02:07.313386687 +0000 UTC m=+0.089894288 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.200 189568 DEBUG nova.network.neutron [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Updating instance_info_cache with network_info: [{"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.247 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Releasing lock "refresh_cache-40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.247 189568 DEBUG nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Instance network_info: |[{"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
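
The network_info blob logged above is a list of VIF dicts. A short sketch of pulling the usual fields out of one, with the structure trimmed down from the entry in the log:

    # network_info as logged above, trimmed to the fields used below.
    network_info = [{
        "id": "5f412491-e88a-4387-aa56-6b4e024e1eb2",
        "address": "fa:16:3e:ae:1d:64",
        "network": {
            "id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90",
            "bridge": "br-int",
            "subnets": [{
                "cidr": "10.100.0.0/28",
                "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4}],
            }],
            "meta": {"mtu": 1442, "tunneled": True},
        },
        "devname": "tap5f412491-e8",
    }]

    for vif in network_info:
        fixed_ips = [
            ip["address"]
            for subnet in vif["network"]["subnets"]
            for ip in subnet["ips"]
            if ip["type"] == "fixed"
        ]
        print(vif["devname"], vif["address"], fixed_ips, vif["network"]["meta"]["mtu"])
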
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.248 189568 DEBUG oslo_concurrency.lockutils [req-33d23549-70ee-4985-b2f7-181448ab699f req-7c4b8cd8-2bcc-4d47-a4d2-899051b3584a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.248 189568 DEBUG nova.network.neutron [req-33d23549-70ee-4985-b2f7-181448ab699f req-7c4b8cd8-2bcc-4d47-a4d2-899051b3584a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Refreshing network info cache for port 5f412491-e88a-4387-aa56-6b4e024e1eb2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.252 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Start _get_guest_xml network_info=[{"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.255 189568 INFO nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Creating config drive at /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk.config#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.264 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqjwd85dy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
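
The config drive is produced by shelling out to mkisofs with exactly the flags shown above. As an argv list, the multi-word -publisher value is a single argument, which the flattened log line obscures. A sketch of the same invocation, with paths taken from the log and shown purely for illustration:

    import subprocess

    subprocess.run(
        [
            "/usr/bin/mkisofs",
            "-o", "/var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk.config",
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
            "-quiet", "-J", "-r",
            "-V", "config-2",        # the config-drive volume label cloud-init probes for
            "/tmp/tmpqjwd85dy",      # staging dir holding the config-drive tree
        ],
        check=True,
    )
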
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.290 189568 WARNING nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.298 189568 DEBUG nova.virt.libvirt.host [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.299 189568 DEBUG nova.virt.libvirt.host [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.315 189568 DEBUG nova.virt.libvirt.host [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.316 189568 DEBUG nova.virt.libvirt.host [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.317 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.317 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.318 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.318 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.318 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.319 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.319 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.320 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.320 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.321 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.321 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.321 189568 DEBUG nova.virt.hardware [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
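
The topology search above (flavor and image limits of 0:0:0, hard caps of 65536, and one vCPU yielding the single candidate 1:1:1) amounts to enumerating sockets x cores x threads factorizations of the vCPU count. A simplified sketch of that idea, not nova.virt.hardware itself:

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """All (sockets, cores, threads) triples whose product equals vcpus,
        within the given limits: a simplified model of the search logged above."""
        topologies = []
        for sockets, cores, threads in product(
            range(1, min(vcpus, max_sockets) + 1),
            range(1, min(vcpus, max_cores) + 1),
            range(1, min(vcpus, max_threads) + 1),
        ):
            if sockets * cores * threads == vcpus:
                topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))   # [(1, 1, 1)], matching the log above
    print(possible_topologies(4))   # (1, 2, 2), (4, 1, 1), and so on
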
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.325 189568 DEBUG nova.virt.libvirt.vif [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1211995356',display_name='tempest-ServersTestJSON-server-1211995356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1211995356',id=8,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbuC54RSyNF2gJa7npiiLaIRL78R1TXKo4XNanm90UEgHc1f+7BTaY0iWo/e8z5jrkJOwzot8Y9LI9IMPC58xf5rObkXNbC2mu20jLZlDP5U+zTCPoD9o/vCr9D7if0Pw==',key_name='tempest-keypair-1078658975',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='074be7edf37d4e09a02286825460dcb3',ramdisk_id='',reservation_id='r-8toh0020',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-578797395',owner_user_name='tempest-ServersTestJSON-578797395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:02:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7979dae5a4746189d660cfad52a7031',uuid=40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.326 189568 DEBUG nova.network.os_vif_util [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Converting VIF {"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.327 189568 DEBUG nova.network.os_vif_util [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:1d:64,bridge_name='br-int',has_traffic_filtering=True,id=5f412491-e88a-4387-aa56-6b4e024e1eb2,network=Network(5c0f6fba-7bb5-44dd-9009-a572ffba2e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f412491-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.328 189568 DEBUG nova.objects.instance [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.349 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <uuid>40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0</uuid>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <name>instance-00000008</name>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <nova:name>tempest-ServersTestJSON-server-1211995356</nova:name>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:02:08</nova:creationTime>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:02:08 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:02:08 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:02:08 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:02:08 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:02:08 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:02:08 compute-0 nova_compute[189564]:        <nova:user uuid="b7979dae5a4746189d660cfad52a7031">tempest-ServersTestJSON-578797395-project-member</nova:user>
Dec  1 20:02:08 compute-0 nova_compute[189564]:        <nova:project uuid="074be7edf37d4e09a02286825460dcb3">tempest-ServersTestJSON-578797395</nova:project>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="d169c234-7ac2-4fdc-b9fa-a08c93484d75"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:02:08 compute-0 nova_compute[189564]:        <nova:port uuid="5f412491-e88a-4387-aa56-6b4e024e1eb2">
Dec  1 20:02:08 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <system>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <entry name="serial">40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0</entry>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <entry name="uuid">40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0</entry>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    </system>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <os>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  </os>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <features>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  </features>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk.config"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:ae:1d:64"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <target dev="tap5f412491-e8"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/console.log" append="off"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <video>
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    </video>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:02:08 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:02:08 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:02:08 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:02:08 compute-0 nova_compute[189564]: </domain>
Dec  1 20:02:08 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
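
After _get_guest_xml returns, the driver passes this XML to libvirt to define and boot the guest. A minimal sketch of that step using the libvirt Python bindings, assuming the libvirt-python package and a local qemu:///system socket, with none of the error handling or rollback the real driver has:

    import libvirt

    with open("instance-domain.xml") as f:   # the XML dumped above, saved locally
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # persistently define the guest
        dom.create()                # power it on
        print(dom.name(), "active:", dom.isActive() == 1)
    finally:
        conn.close()
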
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.351 189568 DEBUG nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Preparing to wait for external event network-vif-plugged-5f412491-e88a-4387-aa56-6b4e024e1eb2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.351 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.351 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.352 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.352 189568 DEBUG nova.virt.libvirt.vif [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:01:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1211995356',display_name='tempest-ServersTestJSON-server-1211995356',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1211995356',id=8,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbuC54RSyNF2gJa7npiiLaIRL78R1TXKo4XNanm90UEgHc1f+7BTaY0iWo/e8z5jrkJOwzot8Y9LI9IMPC58xf5rObkXNbC2mu20jLZlDP5U+zTCPoD9o/vCr9D7if0Pw==',key_name='tempest-keypair-1078658975',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='074be7edf37d4e09a02286825460dcb3',ramdisk_id='',reservation_id='r-8toh0020',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-578797395',owner_user_name='tempest-ServersTestJSON-578797395-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:02:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7979dae5a4746189d660cfad52a7031',uuid=40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.353 189568 DEBUG nova.network.os_vif_util [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Converting VIF {"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.353 189568 DEBUG nova.network.os_vif_util [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:1d:64,bridge_name='br-int',has_traffic_filtering=True,id=5f412491-e88a-4387-aa56-6b4e024e1eb2,network=Network(5c0f6fba-7bb5-44dd-9009-a572ffba2e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f412491-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.354 189568 DEBUG os_vif [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:1d:64,bridge_name='br-int',has_traffic_filtering=True,id=5f412491-e88a-4387-aa56-6b4e024e1eb2,network=Network(5c0f6fba-7bb5-44dd-9009-a572ffba2e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f412491-e8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.354 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.355 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.355 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.358 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.358 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f412491-e8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.358 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f412491-e8, col_values=(('external_ids', {'iface-id': '5f412491-e88a-4387-aa56-6b4e024e1eb2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:1d:64', 'vm-uuid': '40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.360 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 NetworkManager[56474]: <info>  [1764619328.3621] manager: (tap5f412491-e8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.363 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.371 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.372 189568 INFO os_vif [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:1d:64,bridge_name='br-int',has_traffic_filtering=True,id=5f412491-e88a-4387-aa56-6b4e024e1eb2,network=Network(5c0f6fba-7bb5-44dd-9009-a572ffba2e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f412491-e8')#033[00m
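The plug sequence above is os-vif's OVS plugin at work: an idempotent AddBridgeCommand (may_exist=True, hence "Transaction caused no change"), an AddPortCommand for the tap device, and a DbSetCommand stamping the Interface's external_ids with the Neutron port UUID as iface-id, which ovn-controller later matches against the Southbound Port_Binding. A minimal sketch of the same transaction using ovsdbapp's documented API; the socket path mirrors a typical host and the values mirror the log, but the snippet is illustrative, not os-vif's code:

    # Sketch: the AddBridge/AddPort/DbSet transaction logged above, written
    # against ovsdbapp's Open_vSwitch schema API. Assumes a local
    # ovsdb-server at the default unix socket; run as root.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(ovs.add_port('br-int', 'tap5f412491-e8', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tap5f412491-e8',
            ('external_ids', {
                'iface-id': '5f412491-e88a-4387-aa56-6b4e024e1eb2',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:ae:1d:64'})))

Setting iface-id is the hand-off point: from here on it is ovn-controller, not Nova, that claims the port, as the binding INFO lines further below show.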
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.391 189568 DEBUG oslo_concurrency.processutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqjwd85dy" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.440 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.441 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.442 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] No VIF found with MAC fa:16:3e:ae:1d:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.443 189568 INFO nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Using config drive#033[00m
Dec  1 20:02:08 compute-0 NetworkManager[56474]: <info>  [1764619328.4629] manager: (tap7101ff55-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec  1 20:02:08 compute-0 kernel: tap7101ff55-a9: entered promiscuous mode
Dec  1 20:02:08 compute-0 ovn_controller[97948]: 2025-12-01T20:02:08Z|00092|binding|INFO|Claiming lport 7101ff55-a92d-431c-8cc4-8b3412507465 for this chassis.
Dec  1 20:02:08 compute-0 ovn_controller[97948]: 2025-12-01T20:02:08Z|00093|binding|INFO|7101ff55-a92d-431c-8cc4-8b3412507465: Claiming fa:16:3e:69:55:e7 10.100.0.6
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.474 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.479 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:55:e7 10.100.0.6'], port_security=['fa:16:3e:69:55:e7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4ace6300-5447-4f61-9b27-a7249155c57b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6d551f8-4db8-41ef-9a06-51292bc6bab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4517904b95d64f0c874d5afda12566c4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b68416a2-a571-45d1-83ff-8369ecb15d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cd09c29-bcaf-417a-9d6d-85e82a6aa131, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=7101ff55-a92d-431c-8cc4-8b3412507465) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.483 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 7101ff55-a92d-431c-8cc4-8b3412507465 in datapath f6d551f8-4db8-41ef-9a06-51292bc6bab6 bound to our chassis#033[00m
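The 20:02:08.479 match is an ovsdbapp RowEvent on the Southbound Port_Binding table: the agent reacts when a port's chassis column goes from empty to this chassis, then provisions metadata for the datapath. A hedged sketch of such an event class follows; only the ovsdbapp RowEvent API is real, and the match logic is a simplified stand-in for neutron's PortBindingUpdatedEvent:

    # Sketch of a Port_Binding update event in the style matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingBoundEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the chassis column went from empty to set,
            # i.e. old=Port_Binding(chassis=[]) as in the log line above.
            return (hasattr(old, 'chassis') and not old.chassis
                    and bool(row.chassis))

        def run(self, event, row, old):
            print('lport %s bound to our chassis' % row.logical_port)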
Dec  1 20:02:08 compute-0 ovn_controller[97948]: 2025-12-01T20:02:08Z|00094|binding|INFO|Setting lport 7101ff55-a92d-431c-8cc4-8b3412507465 ovn-installed in OVS
Dec  1 20:02:08 compute-0 ovn_controller[97948]: 2025-12-01T20:02:08Z|00095|binding|INFO|Setting lport 7101ff55-a92d-431c-8cc4-8b3412507465 up in Southbound
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.492 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6d551f8-4db8-41ef-9a06-51292bc6bab6#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.495 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 systemd-machined[155891]: New machine qemu-8-instance-00000009.
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.505 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf9759d-08db-420c-bcd6-ff901396eef2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.509 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6d551f8-41 in ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.512 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6d551f8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.512 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[7f05570d-5520-4930-98bd-f9fd34f53223]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.513 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ec49b5-61ea-4a14-a816-950d91676791]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
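Provisioning a datapath starts with a veth pair: one end (tapf6d551f8-41) lives inside the per-network ovnmeta- namespace and will carry the metadata IP, while the peer (tapf6d551f8-40) stays in the root namespace and is plugged into br-int a few lines below. Roughly, in pyroute2 terms (pyroute2 is also what neutron's privileged ip_lib drives underneath); a sketch, not neutron's implementation:

    # Sketch: create the veth pair with one end born directly inside the
    # ovnmeta namespace, as the "Creating VETH ..." line describes.
    # Requires root and the namespace registered under /var/run/netns.
    from pyroute2 import IPRoute

    NS = 'ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6'
    with IPRoute() as ipr:
        ipr.link('add', ifname='tapf6d551f8-40', kind='veth',
                 peer={'ifname': 'tapf6d551f8-41', 'net_ns_fd': NS})
        idx = ipr.link_lookup(ifname='tapf6d551f8-40')[0]
        ipr.link('set', index=idx, state='up')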
Dec  1 20:02:08 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000009.
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.523 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[0dadc663-3f2b-4fc5-ba04-65cbce90877f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 systemd-udevd[254668]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:02:08 compute-0 NetworkManager[56474]: <info>  [1764619328.5441] device (tap7101ff55-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:02:08 compute-0 NetworkManager[56474]: <info>  [1764619328.5464] device (tap7101ff55-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.550 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e707f1-9013-4cbf-aa3c-7378104c8a75]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.579 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[03c86b61-cd5e-4337-a61b-0b4ce4a436b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.585 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[96a5cab4-c9c4-4723-9b43-021fc7a10b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 NetworkManager[56474]: <info>  [1764619328.5866] manager: (tapf6d551f8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.618 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[9fee4b8e-d366-435d-81cd-b0619142a776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.622 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[d51b5e02-a4f6-4658-ab61-4d3b1080ca45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 NetworkManager[56474]: <info>  [1764619328.6458] device (tapf6d551f8-40): carrier: link connected
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.651 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4dadb0-fef4-44aa-880a-caa181827ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.670 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[8342ea29-bea3-4263-92d4-95cab33a004e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6d551f8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:44:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581636, 'reachable_time': 31952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254702, 'error': None, 'target': 'ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.690 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[052621d0-4eb4-4a6a-b0a6-4252bcdae998]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:4458'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581636, 'tstamp': 581636}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254703, 'error': None, 'target': 'ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.708 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b44778a0-3c4e-4f20-ae8f-0d827e84cd75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6d551f8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:44:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581636, 'reachable_time': 31952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254704, 'error': None, 'target': 'ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
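The two large privsep replies above are pyroute2 netlink messages (RTM_NEWLINK/RTM_NEWADDR) rendered as nested ['ATTR_NAME', value] lists; the agent uses them to confirm the veth came up in the namespace with the expected MAC. pyroute2 message objects expose a .get_attr() method for this; on plain dicts shaped like the logged ones, the equivalent is a short scan:

    # Tiny helper for the dumps above: first value for a netlink attribute
    # in a pyroute2-style 'attrs' list (illustrative, stdlib only).
    def get_attr(msg, name):
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return None

    # On the RTM_NEWLINK reply logged at 20:02:08.670:
    #   get_attr(msg, 'IFLA_IFNAME')    -> 'tapf6d551f8-41'
    #   get_attr(msg, 'IFLA_ADDRESS')   -> 'fa:16:3e:09:44:58'
    #   get_attr(msg, 'IFLA_OPERSTATE') -> 'UP'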
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.743 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9396971e-2661-48af-9ac2-a8bedf2ae999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.806 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[fa132f82-ecae-4c2a-831c-e9518a2dbe18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.808 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6d551f8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.808 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.809 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6d551f8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:08 compute-0 NetworkManager[56474]: <info>  [1764619328.8117] manager: (tapf6d551f8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.813 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 kernel: tapf6d551f8-40: entered promiscuous mode
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.819 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.820 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6d551f8-40, col_values=(('external_ids', {'iface-id': 'cb6caae9-9b40-4384-a692-7fed62ba0bdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:08 compute-0 ovn_controller[97948]: 2025-12-01T20:02:08Z|00096|binding|INFO|Releasing lport cb6caae9-9b40-4384-a692-7fed62ba0bdc from this chassis (sb_readonly=0)
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.822 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.833 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.840 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.841 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6d551f8-4db8-41ef-9a06-51292bc6bab6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6d551f8-4db8-41ef-9a06-51292bc6bab6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.842 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b2adc044-842f-4779-b9fa-8ee893d49384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.843 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: global
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-f6d551f8-4db8-41ef-9a06-51292bc6bab6
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/f6d551f8-4db8-41ef-9a06-51292bc6bab6.pid.haproxy
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID f6d551f8-4db8-41ef-9a06-51292bc6bab6
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 20:02:08 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:08.844 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6', 'env', 'PROCESS_TAG=haproxy-f6d551f8-4db8-41ef-9a06-51292bc6bab6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6d551f8-4db8-41ef-9a06-51292bc6bab6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
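The generated haproxy config binds 169.254.169.254:80 inside the namespace and relays requests to the metadata agent's unix socket server at /var/lib/neutron/metadata_proxy, stamping X-OVN-Network-ID so the agent can resolve which network (and hence which instance) is asking. The rootwrap command then launches haproxy inside the namespace; stripped of the rootwrap and env wrapping, it reduces to:

    # Sketch of the logged launch: haproxy inside the ovnmeta namespace
    # against the rendered config. Requires root; rootwrap and the
    # PROCESS_TAG environment variable are omitted.
    import subprocess

    NETNS = 'ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6'
    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           'f6d551f8-4db8-41ef-9a06-51292bc6bab6.conf')
    subprocess.run(['ip', 'netns', 'exec', NETNS, 'haproxy', '-f', CFG],
                   check=True)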
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.877 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619328.8765988, 4ace6300-5447-4f61-9b27-a7249155c57b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.878 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] VM Started (Lifecycle Event)#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.907 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.913 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619328.8767474, 4ace6300-5447-4f61-9b27-a7249155c57b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.914 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.944 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.949 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:02:08 compute-0 nova_compute[189564]: 2025-12-01 20:02:08.967 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
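The Started/Paused pair is normal during spawn: libvirt creates the domain paused, Nova finishes wiring it up, then resumes it (the Resumed event arrives at 20:02:10.123 below). The numbers in the sync line are nova.compute.power_state constants: DB power_state 0 is NOSTATE (the DB has never seen it powered on) and VM power_state 3 is PAUSED; because task_state is still spawning, the handler deliberately skips writing the transient state back. A simplified reading of that behaviour, under the stated assumption that this mirrors only what the log shows:

    # The constants behind "DB power_state: 0, VM power_state: 3" above
    # (values as defined by nova.compute.power_state).
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

    def sync_power_state(task_state, db_state, vm_state):
        # Simplified sketch: transient hypervisor states are ignored
        # while a task is pending ("has a pending task ... Skip.").
        if task_state is not None:
            return 'skip'
        return vm_state if vm_state != db_state else 'in-sync'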
Dec  1 20:02:09 compute-0 podman[254745]: 2025-12-01 20:02:09.270778432 +0000 UTC m=+0.070110523 container create d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:02:09 compute-0 systemd[1]: Started libpod-conmon-d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670.scope.
Dec  1 20:02:09 compute-0 podman[254745]: 2025-12-01 20:02:09.227104632 +0000 UTC m=+0.026436713 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 20:02:09 compute-0 systemd[1]: Started libcrun container.
Dec  1 20:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4211cff01512ea3e49fdeae5b3d8f473d64bddd91f6be1fac0d1fca1ad30c9f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 20:02:09 compute-0 podman[254745]: 2025-12-01 20:02:09.389150184 +0000 UTC m=+0.188482285 container init d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:02:09 compute-0 podman[254745]: 2025-12-01 20:02:09.39928594 +0000 UTC m=+0.198618011 container start d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:02:09 compute-0 neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6[254759]: [NOTICE]   (254764) : New worker (254766) forked
Dec  1 20:02:09 compute-0 neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6[254759]: [NOTICE]   (254764) : Loading success.
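In this podified deployment haproxy is not executed directly on the host: the agent wraps it in a podman container (neutron-haproxy-ovnmeta-<network>) built from the neutron-metadata-agent image, and the two NOTICE lines are haproxy's master process forking its worker and confirming the config loaded. A rough manual equivalent with commonly used podman flags; the deployment's exact wiring (mounts, namespace attachment, PROCESS_TAG) is omitted, so treat the flags as assumptions:

    # Sketch: hand-launching the wrapper container seen in the podman
    # create/init/start lines above. Flags are illustrative, not copied
    # from the deployment.
    import subprocess

    subprocess.run([
        'podman', 'run', '--detach', '--network', 'host', '--privileged',
        '--name',
        'neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6',
        'quay.io/podified-antelope-centos9/'
        'openstack-neutron-metadata-agent-ovn:current-podified',
    ], check=True)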
Dec  1 20:02:09 compute-0 nova_compute[189564]: 2025-12-01 20:02:09.782 189568 INFO nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Creating config drive at /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk.config#033[00m
Dec  1 20:02:09 compute-0 nova_compute[189564]: 2025-12-01 20:02:09.789 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9qgblsg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:09 compute-0 nova_compute[189564]: 2025-12-01 20:02:09.924 189568 DEBUG oslo_concurrency.processutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9qgblsg" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
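A config drive is just an ISO9660 image with Joliet/Rock Ridge extensions and the volume label config-2, built from a temporary directory of metadata files; cloud-init in the guest locates it by that label. The logged mkisofs invocation reduces to a one-call wrapper (mkisofs, or its genisoimage alias, must be installed; the publisher string is dropped and the staging directory contents are up to the caller):

    # Sketch: build a config-2 ISO with the same flags as the logged
    # mkisofs command.
    import subprocess

    def make_config_drive(iso_path, staging_dir):
        subprocess.run([
            'mkisofs', '-o', iso_path,
            '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
            '-quiet', '-J', '-r', '-V', 'config-2',
            staging_dir,
        ], check=True)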
Dec  1 20:02:09 compute-0 NetworkManager[56474]: <info>  [1764619329.9859] manager: (tap5f412491-e8): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Dec  1 20:02:09 compute-0 kernel: tap5f412491-e8: entered promiscuous mode
Dec  1 20:02:09 compute-0 systemd-udevd[254684]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:02:09 compute-0 ovn_controller[97948]: 2025-12-01T20:02:09Z|00097|binding|INFO|Claiming lport 5f412491-e88a-4387-aa56-6b4e024e1eb2 for this chassis.
Dec  1 20:02:09 compute-0 ovn_controller[97948]: 2025-12-01T20:02:09Z|00098|binding|INFO|5f412491-e88a-4387-aa56-6b4e024e1eb2: Claiming fa:16:3e:ae:1d:64 10.100.0.5
Dec  1 20:02:09 compute-0 nova_compute[189564]: 2025-12-01 20:02:09.989 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:09 compute-0 nova_compute[189564]: 2025-12-01 20:02:09.996 189568 DEBUG nova.network.neutron [req-2560b864-a8e9-418e-8c94-d3a19a52509b req-3821a581-a845-421d-a008-ad1af8a48c42 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updated VIF entry in instance network info cache for port 7101ff55-a92d-431c-8cc4-8b3412507465. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:02:09 compute-0 nova_compute[189564]: 2025-12-01 20:02:09.997 189568 DEBUG nova.network.neutron [req-2560b864-a8e9-418e-8c94-d3a19a52509b req-3821a581-a845-421d-a008-ad1af8a48c42 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updating instance_info_cache with network_info: [{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.000 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:1d:64 10.100.0.5'], port_security=['fa:16:3e:ae:1d:64 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c0f6fba-7bb5-44dd-9009-a572ffba2e90', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '074be7edf37d4e09a02286825460dcb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ecf62a5-ec01-4b95-ba8d-23b8e92002aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e203a296-7e55-44d6-b67a-9567ace4ce1c, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=5f412491-e88a-4387-aa56-6b4e024e1eb2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.002 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 5f412491-e88a-4387-aa56-6b4e024e1eb2 in datapath 5c0f6fba-7bb5-44dd-9009-a572ffba2e90 bound to our chassis#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.003 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c0f6fba-7bb5-44dd-9009-a572ffba2e90#033[00m
Dec  1 20:02:10 compute-0 NetworkManager[56474]: <info>  [1764619330.0048] device (tap5f412491-e8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:02:10 compute-0 NetworkManager[56474]: <info>  [1764619330.0064] device (tap5f412491-e8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:02:10 compute-0 ovn_controller[97948]: 2025-12-01T20:02:10Z|00099|binding|INFO|Setting lport 5f412491-e88a-4387-aa56-6b4e024e1eb2 ovn-installed in OVS
Dec  1 20:02:10 compute-0 ovn_controller[97948]: 2025-12-01T20:02:10Z|00100|binding|INFO|Setting lport 5f412491-e88a-4387-aa56-6b4e024e1eb2 up in Southbound
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.008 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.011 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.017 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[239b255f-2d35-470c-aab1-e2ce7b81f5bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.018 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c0f6fba-71 in ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.020 189568 DEBUG oslo_concurrency.lockutils [req-2560b864-a8e9-418e-8c94-d3a19a52509b req-3821a581-a845-421d-a008-ad1af8a48c42 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.021 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c0f6fba-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.021 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d6922257-3c42-494b-bd20-274dded0e068]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.022 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e26e7a-7c3e-4151-afc3-f5ddd27c5eca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.032 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[65929568-546e-4492-bc4b-a34d5ade7b7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 systemd-machined[155891]: New machine qemu-9-instance-00000008.
Dec  1 20:02:10 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000008.
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.059 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[4202e430-6be8-4d5f-a513-cbf089d0c8e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.087 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8df042-eb17-4708-8f1e-20599f2af7e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 NetworkManager[56474]: <info>  [1764619330.1031] manager: (tap5c0f6fba-70): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.104 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[5315770c-2882-42ee-99a6-7224a2554e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.112 189568 DEBUG nova.compute.manager [req-aa00709c-2949-4bcd-9d72-67224b494410 req-0b387c99-1427-49f3-b7fb-b082aaba7396 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received event network-vif-plugged-7101ff55-a92d-431c-8cc4-8b3412507465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.113 189568 DEBUG oslo_concurrency.lockutils [req-aa00709c-2949-4bcd-9d72-67224b494410 req-0b387c99-1427-49f3-b7fb-b082aaba7396 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.113 189568 DEBUG oslo_concurrency.lockutils [req-aa00709c-2949-4bcd-9d72-67224b494410 req-0b387c99-1427-49f3-b7fb-b082aaba7396 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.114 189568 DEBUG oslo_concurrency.lockutils [req-aa00709c-2949-4bcd-9d72-67224b494410 req-0b387c99-1427-49f3-b7fb-b082aaba7396 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.114 189568 DEBUG nova.compute.manager [req-aa00709c-2949-4bcd-9d72-67224b494410 req-0b387c99-1427-49f3-b7fb-b082aaba7396 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Processing event network-vif-plugged-7101ff55-a92d-431c-8cc4-8b3412507465 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.115 189568 DEBUG nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
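Spawn is gated on the network-vif-plugged external event: before starting the guest, compute registers the events it expects; once OVN reports the port up, Neutron POSTs the event to Nova, which pops the waiter ("Processing event network-vif-plugged-...") and releases the spawning thread ("Instance event wait completed in 1 seconds"). The real code in nova.compute.manager runs on eventlet with the per-instance "-events" lock seen above; a minimal sketch of the register/wait/pop pattern with stdlib threading, names illustrative rather than Nova's API:

    # Sketch of the pattern behind the lock and event lines above.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}                  # (instance, event) -> Event

        def prepare(self, instance, name):
            ev = threading.Event()
            self._events[(instance, name)] = ev
            return ev

        def pop(self, instance, name):
            ev = self._events.pop((instance, name), None)
            if ev:
                ev.set()                       # releases the waiter below

    events = InstanceEvents()
    ev = events.prepare('4ace6300', 'network-vif-plugged-7101ff55')
    # ... meanwhile, the neutron-facing thread delivers the event:
    events.pop('4ace6300', 'network-vif-plugged-7101ff55')
    assert ev.wait(timeout=300)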
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.123 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619330.1228943, 4ace6300-5447-4f61-9b27-a7249155c57b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.123 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.126 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.131 189568 INFO nova.virt.libvirt.driver [-] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Instance spawned successfully.#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.132 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.144 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[f73c9da3-6506-4052-b9d1-5223325120d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.150 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[eaae812e-9348-47cb-ade2-fc967908d343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.154 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.170 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.175 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.175 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.176 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.176 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.177 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.177 189568 DEBUG nova.virt.libvirt.driver [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
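Once the guest exists, the driver back-fills the bus/model defaults it actually chose (virtio disk/VIF/video, SATA CD-ROM, USB input with a usbtablet pointer) into the instance's image properties, so later operations such as rebuild or device attach keep using the same buses even if global defaults change. Conceptually it is a setdefault pass; a toy version, noting that Nova persists these in instance system metadata rather than a plain dict:

    # Toy sketch of the "_register_undefined_instance_details" behaviour
    # logged above: persist defaults only for properties the image left
    # unset.
    SPAWN_DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_defaults(image_props):
        for key, value in SPAWN_DEFAULTS.items():
            image_props.setdefault(key, value)
        return image_props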
Dec  1 20:02:10 compute-0 NetworkManager[56474]: <info>  [1764619330.1874] device (tap5c0f6fba-70): carrier: link connected
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.193 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[d0875adf-163e-44fc-8c4c-49f93b31b291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.209 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.215 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[3158e6dc-ae18-4851-992a-f41bf934063d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c0f6fba-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:0d:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581790, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254813, 'error': None, 'target': 'ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.235 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7b69f2-96f2-493f-bd32-18e2426af3a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe07:da9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581790, 'tstamp': 581790}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254814, 'error': None, 'target': 'ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.254 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c002bb-27c2-4546-bab7-be598b0e1ddd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c0f6fba-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:0d:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581790, 'reachable_time': 40260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254815, 'error': None, 'target': 'ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.284 189568 INFO nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Took 9.45 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.285 189568 DEBUG nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.295 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[df6da84f-c0e1-44a8-b0fe-cf11e4004d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.358 189568 INFO nova.compute.manager [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Took 9.98 seconds to build instance.#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.363 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[427a92cc-9f26-4e15-a1cb-f6110ff91c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.365 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c0f6fba-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.366 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.366 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c0f6fba-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.368 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:10 compute-0 kernel: tap5c0f6fba-70: entered promiscuous mode
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.370 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:10 compute-0 NetworkManager[56474]: <info>  [1764619330.3722] manager: (tap5c0f6fba-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.374 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c0f6fba-70, col_values=(('external_ids', {'iface-id': '87baa0a5-a8ed-4944-b599-f3b3af896d38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.377 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:10 compute-0 ovn_controller[97948]: 2025-12-01T20:02:10Z|00101|binding|INFO|Releasing lport 87baa0a5-a8ed-4944-b599-f3b3af896d38 from this chassis (sb_readonly=0)
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.378 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.380 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c0f6fba-7bb5-44dd-9009-a572ffba2e90.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c0f6fba-7bb5-44dd-9009-a572ffba2e90.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.381 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[dff9aed2-cba7-4090-8791-a81689928497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.383 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: global
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-5c0f6fba-7bb5-44dd-9009-a572ffba2e90
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/5c0f6fba-7bb5-44dd-9009-a572ffba2e90.pid.haproxy
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID 5c0f6fba-7bb5-44dd-9009-a572ffba2e90
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 20:02:10 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:10.384 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90', 'env', 'PROCESS_TAG=haproxy-5c0f6fba-7bb5-44dd-9009-a572ffba2e90', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c0f6fba-7bb5-44dd-9009-a572ffba2e90.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.390 189568 DEBUG oslo_concurrency.lockutils [None req-27a587a4-40c5-444e-a53d-f9c90e3a57ff f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.391 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.629 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619330.6291664, 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.630 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] VM Started (Lifecycle Event)#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.653 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.660 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619330.6304586, 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.660 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.689 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.694 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.730 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:02:10 compute-0 podman[254853]: 2025-12-01 20:02:10.823474864 +0000 UTC m=+0.058167501 container create 7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.849 189568 DEBUG nova.network.neutron [req-33d23549-70ee-4985-b2f7-181448ab699f req-7c4b8cd8-2bcc-4d47-a4d2-899051b3584a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Updated VIF entry in instance network info cache for port 5f412491-e88a-4387-aa56-6b4e024e1eb2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.849 189568 DEBUG nova.network.neutron [req-33d23549-70ee-4985-b2f7-181448ab699f req-7c4b8cd8-2bcc-4d47-a4d2-899051b3584a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Updating instance_info_cache with network_info: [{"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:10 compute-0 systemd[1]: Started libpod-conmon-7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3.scope.
Dec  1 20:02:10 compute-0 nova_compute[189564]: 2025-12-01 20:02:10.870 189568 DEBUG oslo_concurrency.lockutils [req-33d23549-70ee-4985-b2f7-181448ab699f req-7c4b8cd8-2bcc-4d47-a4d2-899051b3584a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:10 compute-0 podman[254853]: 2025-12-01 20:02:10.797202506 +0000 UTC m=+0.031895163 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 20:02:10 compute-0 systemd[1]: Started libcrun container.
Dec  1 20:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4c31a89dbabbd1a7368cd6edaaead01a3ca318b6b60c4edd6970f20539c8d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 20:02:10 compute-0 podman[254853]: 2025-12-01 20:02:10.951103304 +0000 UTC m=+0.185795961 container init 7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:02:10 compute-0 podman[254853]: 2025-12-01 20:02:10.958210026 +0000 UTC m=+0.192902663 container start 7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 20:02:10 compute-0 neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90[254868]: [NOTICE]   (254872) : New worker (254874) forked
Dec  1 20:02:10 compute-0 neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90[254868]: [NOTICE]   (254872) : Loading success.
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.890 189568 DEBUG nova.compute.manager [req-a6283e81-84cb-4057-8b6a-e50d7b99ae3f req-c0568ddf-a565-43cf-bbca-491d33499a26 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Received event network-vif-plugged-5f412491-e88a-4387-aa56-6b4e024e1eb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.891 189568 DEBUG oslo_concurrency.lockutils [req-a6283e81-84cb-4057-8b6a-e50d7b99ae3f req-c0568ddf-a565-43cf-bbca-491d33499a26 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.893 189568 DEBUG oslo_concurrency.lockutils [req-a6283e81-84cb-4057-8b6a-e50d7b99ae3f req-c0568ddf-a565-43cf-bbca-491d33499a26 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.894 189568 DEBUG oslo_concurrency.lockutils [req-a6283e81-84cb-4057-8b6a-e50d7b99ae3f req-c0568ddf-a565-43cf-bbca-491d33499a26 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.895 189568 DEBUG nova.compute.manager [req-a6283e81-84cb-4057-8b6a-e50d7b99ae3f req-c0568ddf-a565-43cf-bbca-491d33499a26 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Processing event network-vif-plugged-5f412491-e88a-4387-aa56-6b4e024e1eb2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.897 189568 DEBUG nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.904 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619331.9043317, 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.905 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.909 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.917 189568 INFO nova.virt.libvirt.driver [-] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Instance spawned successfully.#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.918 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.933 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.942 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.947 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.947 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.948 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.948 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.948 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.949 189568 DEBUG nova.virt.libvirt.driver [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:11 compute-0 nova_compute[189564]: 2025-12-01 20:02:11.987 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.046 189568 INFO nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Took 11.63 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.047 189568 DEBUG nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.130 189568 INFO nova.compute.manager [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Took 12.20 seconds to build instance.#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.156 189568 DEBUG oslo_concurrency.lockutils [None req-a63dd88b-b0e6-4d69-9e61-96ca65e37b62 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:12.220 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:12.221 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:12.222 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.436 189568 DEBUG nova.compute.manager [req-f9fc047a-f4b9-455d-840f-fa78b14e7b51 req-36b26322-b3bd-4efd-8e73-d5d8e893d3c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received event network-vif-plugged-7101ff55-a92d-431c-8cc4-8b3412507465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.436 189568 DEBUG oslo_concurrency.lockutils [req-f9fc047a-f4b9-455d-840f-fa78b14e7b51 req-36b26322-b3bd-4efd-8e73-d5d8e893d3c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.437 189568 DEBUG oslo_concurrency.lockutils [req-f9fc047a-f4b9-455d-840f-fa78b14e7b51 req-36b26322-b3bd-4efd-8e73-d5d8e893d3c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.437 189568 DEBUG oslo_concurrency.lockutils [req-f9fc047a-f4b9-455d-840f-fa78b14e7b51 req-36b26322-b3bd-4efd-8e73-d5d8e893d3c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.438 189568 DEBUG nova.compute.manager [req-f9fc047a-f4b9-455d-840f-fa78b14e7b51 req-36b26322-b3bd-4efd-8e73-d5d8e893d3c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] No waiting events found dispatching network-vif-plugged-7101ff55-a92d-431c-8cc4-8b3412507465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:02:12 compute-0 nova_compute[189564]: 2025-12-01 20:02:12.438 189568 WARNING nova.compute.manager [req-f9fc047a-f4b9-455d-840f-fa78b14e7b51 req-36b26322-b3bd-4efd-8e73-d5d8e893d3c5 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received unexpected event network-vif-plugged-7101ff55-a92d-431c-8cc4-8b3412507465 for instance with vm_state active and task_state None.#033[00m
Dec  1 20:02:13 compute-0 nova_compute[189564]: 2025-12-01 20:02:13.267 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:02:13 compute-0 nova_compute[189564]: 2025-12-01 20:02:13.267 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 20:02:13 compute-0 nova_compute[189564]: 2025-12-01 20:02:13.361 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:13 compute-0 nova_compute[189564]: 2025-12-01 20:02:13.843 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:14 compute-0 podman[254883]: 2025-12-01 20:02:14.303883217 +0000 UTC m=+0.079151014 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.680 189568 DEBUG nova.compute.manager [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Received event network-vif-plugged-5f412491-e88a-4387-aa56-6b4e024e1eb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.681 189568 DEBUG oslo_concurrency.lockutils [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.681 189568 DEBUG oslo_concurrency.lockutils [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.681 189568 DEBUG oslo_concurrency.lockutils [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.681 189568 DEBUG nova.compute.manager [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] No waiting events found dispatching network-vif-plugged-5f412491-e88a-4387-aa56-6b4e024e1eb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.682 189568 WARNING nova.compute.manager [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Received unexpected event network-vif-plugged-5f412491-e88a-4387-aa56-6b4e024e1eb2 for instance with vm_state active and task_state None.#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.682 189568 DEBUG nova.compute.manager [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received event network-changed-7101ff55-a92d-431c-8cc4-8b3412507465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.682 189568 DEBUG nova.compute.manager [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Refreshing instance network info cache due to event network-changed-7101ff55-a92d-431c-8cc4-8b3412507465. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.682 189568 DEBUG oslo_concurrency.lockutils [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.682 189568 DEBUG oslo_concurrency.lockutils [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:14 compute-0 nova_compute[189564]: 2025-12-01 20:02:14.683 189568 DEBUG nova.network.neutron [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Refreshing network info cache for port 7101ff55-a92d-431c-8cc4-8b3412507465 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:02:16 compute-0 nova_compute[189564]: 2025-12-01 20:02:16.412 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:16 compute-0 nova_compute[189564]: 2025-12-01 20:02:16.792 189568 DEBUG nova.compute.manager [req-2f16166b-5a20-4d8c-9d29-45695a20cc87 req-b2c1072a-e19c-4983-935c-109c9b668955 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Received event network-changed-5f412491-e88a-4387-aa56-6b4e024e1eb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:16 compute-0 nova_compute[189564]: 2025-12-01 20:02:16.793 189568 DEBUG nova.compute.manager [req-2f16166b-5a20-4d8c-9d29-45695a20cc87 req-b2c1072a-e19c-4983-935c-109c9b668955 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Refreshing instance network info cache due to event network-changed-5f412491-e88a-4387-aa56-6b4e024e1eb2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:02:16 compute-0 nova_compute[189564]: 2025-12-01 20:02:16.793 189568 DEBUG oslo_concurrency.lockutils [req-2f16166b-5a20-4d8c-9d29-45695a20cc87 req-b2c1072a-e19c-4983-935c-109c9b668955 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:16 compute-0 nova_compute[189564]: 2025-12-01 20:02:16.794 189568 DEBUG oslo_concurrency.lockutils [req-2f16166b-5a20-4d8c-9d29-45695a20cc87 req-b2c1072a-e19c-4983-935c-109c9b668955 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:16 compute-0 nova_compute[189564]: 2025-12-01 20:02:16.794 189568 DEBUG nova.network.neutron [req-2f16166b-5a20-4d8c-9d29-45695a20cc87 req-b2c1072a-e19c-4983-935c-109c9b668955 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Refreshing network info cache for port 5f412491-e88a-4387-aa56-6b4e024e1eb2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:02:16 compute-0 nova_compute[189564]: 2025-12-01 20:02:16.971 189568 DEBUG nova.network.neutron [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updated VIF entry in instance network info cache for port 7101ff55-a92d-431c-8cc4-8b3412507465. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:02:16 compute-0 nova_compute[189564]: 2025-12-01 20:02:16.972 189568 DEBUG nova.network.neutron [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updating instance_info_cache with network_info: [{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.008 189568 DEBUG oslo_concurrency.lockutils [req-32ef380f-10e4-43d2-a7b2-4e9a5d07f483 req-bc8a130a-95f3-4b31-9415-a59ba624e7a2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.576 189568 DEBUG oslo_concurrency.lockutils [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.576 189568 DEBUG oslo_concurrency.lockutils [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.577 189568 DEBUG oslo_concurrency.lockutils [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.577 189568 DEBUG oslo_concurrency.lockutils [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.578 189568 DEBUG oslo_concurrency.lockutils [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.579 189568 INFO nova.compute.manager [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Terminating instance#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.581 189568 DEBUG nova.compute.manager [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 20:02:17 compute-0 kernel: tap5f412491-e8 (unregistering): left promiscuous mode
Dec  1 20:02:17 compute-0 NetworkManager[56474]: <info>  [1764619337.6162] device (tap5f412491-e8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:02:17 compute-0 ovn_controller[97948]: 2025-12-01T20:02:17Z|00102|binding|INFO|Releasing lport 5f412491-e88a-4387-aa56-6b4e024e1eb2 from this chassis (sb_readonly=0)
Dec  1 20:02:17 compute-0 ovn_controller[97948]: 2025-12-01T20:02:17Z|00103|binding|INFO|Setting lport 5f412491-e88a-4387-aa56-6b4e024e1eb2 down in Southbound
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.626 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:17 compute-0 ovn_controller[97948]: 2025-12-01T20:02:17Z|00104|binding|INFO|Removing iface tap5f412491-e8 ovn-installed in OVS
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.629 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.644 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:17 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  1 20:02:17 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000008.scope: Consumed 6.581s CPU time.
Dec  1 20:02:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:17.670 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:1d:64 10.100.0.5'], port_security=['fa:16:3e:ae:1d:64 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c0f6fba-7bb5-44dd-9009-a572ffba2e90', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '074be7edf37d4e09a02286825460dcb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ecf62a5-ec01-4b95-ba8d-23b8e92002aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.240'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e203a296-7e55-44d6-b67a-9567ace4ce1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=5f412491-e88a-4387-aa56-6b4e024e1eb2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:02:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:17.671 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 5f412491-e88a-4387-aa56-6b4e024e1eb2 in datapath 5c0f6fba-7bb5-44dd-9009-a572ffba2e90 unbound from our chassis#033[00m
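The "Matched UPDATE" line above is ovsdbapp's row-event machinery comparing the Port_Binding change against registered events; the "unbound from our chassis" message is the handler reacting to the chassis column being cleared. A minimal sketch of such an event class, assuming only the ovsdbapp package (the match logic is illustrative, not neutron's exact code):

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        """Fire on Port_Binding updates, as in the log above."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only react when the chassis column actually changed,
            # i.e. the port was bound to or unbound from a chassis.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port %s binding changed' % row.logical_port)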
Dec  1 20:02:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:17.673 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c0f6fba-7bb5-44dd-9009-a572ffba2e90, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 20:02:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:17.674 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[53b164bf-f87d-4a73-9e89-63f01d3dcb68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
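The recurring "privsep: reply[...]" lines are round-trips to oslo.privsep's privileged daemon; the leading 4 in each reply tuple matches oslo.privsep's RET message type, with the return value after it. A minimal sketch of how such a privileged entrypoint is declared, assuming oslo.privsep (the context name and function are illustrative, not neutron's):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    privileged = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.privileged',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @privileged.entrypoint
    def remove_device(device, namespace=None):
        # Runs inside the root daemon; the caller blocks until a
        # (msg_type, result) reply like those logged comes back.
        pass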
Dec  1 20:02:17 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:17.675 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90 namespace which is not needed anymore#033[00m
Dec  1 20:02:17 compute-0 systemd-machined[155891]: Machine qemu-9-instance-00000008 terminated.
Dec  1 20:02:17 compute-0 podman[254902]: 2025-12-01 20:02:17.717883893 +0000 UTC m=+0.081276179 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.807 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.814 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.850 189568 INFO nova.virt.libvirt.driver [-] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Instance destroyed successfully.#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.852 189568 DEBUG nova.objects.instance [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lazy-loading 'resources' on Instance uuid 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.901 189568 DEBUG nova.virt.libvirt.vif [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:01:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1211995356',display_name='tempest-ServersTestJSON-server-1211995356',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1211995356',id=8,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbuC54RSyNF2gJa7npiiLaIRL78R1TXKo4XNanm90UEgHc1f+7BTaY0iWo/e8z5jrkJOwzot8Y9LI9IMPC58xf5rObkXNbC2mu20jLZlDP5U+zTCPoD9o/vCr9D7if0Pw==',key_name='tempest-keypair-1078658975',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:02:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='074be7edf37d4e09a02286825460dcb3',ramdisk_id='',reservation_id='r-8toh0020',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-578797395',owner_user_name='tempest-ServersTestJSON-578797395-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:02:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b7979dae5a4746189d660cfad52a7031',uuid=40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.903 189568 DEBUG nova.network.os_vif_util [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Converting VIF {"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.906 189568 DEBUG nova.network.os_vif_util [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:1d:64,bridge_name='br-int',has_traffic_filtering=True,id=5f412491-e88a-4387-aa56-6b4e024e1eb2,network=Network(5c0f6fba-7bb5-44dd-9009-a572ffba2e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f412491-e8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.908 189568 DEBUG os_vif [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:1d:64,bridge_name='br-int',has_traffic_filtering=True,id=5f412491-e88a-4387-aa56-6b4e024e1eb2,network=Network(5c0f6fba-7bb5-44dd-9009-a572ffba2e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f412491-e8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
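The unplug path above goes through the public os-vif entry points with a VIFOpenVSwitch object whose fields are visible in the repr. A minimal sketch, assuming os-vif and its bundled ovs plugin are installed (error handling omitted):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()
    v = vif.VIFOpenVSwitch(
        id='5f412491-e88a-4387-aa56-6b4e024e1eb2',
        address='fa:16:3e:ae:1d:64',
        bridge_name='br-int',
        vif_name='tap5f412491-e8',
        plugin='ovs')
    inst = instance_info.InstanceInfo(
        uuid='40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0',
        name='instance-00000008')
    os_vif.unplug(v, inst)  # success logs "Successfully unplugged vif"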
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.912 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.913 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f412491-e8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
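The transaction above, DelPortCommand with if_exists=True, is ovsdbapp's Open_vSwitch schema API removing the tap port from br-int. A minimal standalone sketch, assuming ovsdbapp and an ovsdb-server socket at the usual path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Same command the log shows nova_compute committing:
    api.del_port('tap5f412491-e8', bridge='br-int',
                 if_exists=True).execute(check_error=True)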
Dec  1 20:02:17 compute-0 neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90[254868]: [NOTICE]   (254872) : haproxy version is 2.8.14-c23fe91
Dec  1 20:02:17 compute-0 neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90[254868]: [NOTICE]   (254872) : path to executable is /usr/sbin/haproxy
Dec  1 20:02:17 compute-0 neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90[254868]: [WARNING]  (254872) : Exiting Master process...
Dec  1 20:02:17 compute-0 neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90[254868]: [ALERT]    (254872) : Current worker (254874) exited with code 143 (Terminated)
Dec  1 20:02:17 compute-0 neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90[254868]: [WARNING]  (254872) : All workers exited. Exiting... (0)
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.923 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.926 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:02:17 compute-0 systemd[1]: libpod-7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3.scope: Deactivated successfully.
Dec  1 20:02:17 compute-0 podman[254953]: 2025-12-01 20:02:17.931114468 +0000 UTC m=+0.094144290 container died 7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.933 189568 INFO os_vif [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:1d:64,bridge_name='br-int',has_traffic_filtering=True,id=5f412491-e88a-4387-aa56-6b4e024e1eb2,network=Network(5c0f6fba-7bb5-44dd-9009-a572ffba2e90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f412491-e8')#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.934 189568 INFO nova.virt.libvirt.driver [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Deleting instance files /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0_del#033[00m
Dec  1 20:02:17 compute-0 nova_compute[189564]: 2025-12-01 20:02:17.935 189568 INFO nova.virt.libvirt.driver [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Deletion of /var/lib/nova/instances/40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0_del complete#033[00m
Dec  1 20:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3-userdata-shm.mount: Deactivated successfully.
Dec  1 20:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d4c31a89dbabbd1a7368cd6edaaead01a3ca318b6b60c4edd6970f20539c8d6-merged.mount: Deactivated successfully.
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.008 189568 INFO nova.compute.manager [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Took 0.43 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.009 189568 DEBUG oslo.service.loopingcall [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.009 189568 DEBUG nova.compute.manager [-] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.010 189568 DEBUG nova.network.neutron [-] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:02:18 compute-0 podman[254953]: 2025-12-01 20:02:18.027932551 +0000 UTC m=+0.190962373 container cleanup 7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:02:18 compute-0 systemd[1]: libpod-conmon-7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3.scope: Deactivated successfully.
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.199 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:18 compute-0 podman[254992]: 2025-12-01 20:02:18.266288467 +0000 UTC m=+0.207737904 container remove 7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  1 20:02:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:18.280 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f0681664-a8aa-42b1-b673-ca5d5d57e464]: (4, ('Mon Dec  1 08:02:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90 (7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3)\n7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3\nMon Dec  1 08:02:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90 (7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3)\n7ee0b7f65774ed4d4602e9212ad3371520b7da839c0136fc6083c1f297756dc3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:18.281 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd8b475-d7dd-4761-94a5-47ce052f6bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:18.282 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c0f6fba-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.285 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:18 compute-0 kernel: tap5c0f6fba-70: left promiscuous mode
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.300 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:18.303 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[59c75705-2d87-4a42-9e0a-ac3b32d49d55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.310 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:18.320 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[bf680c4e-3029-417b-88ef-027d210400b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:18.322 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[5609fbb3-23b1-42c3-8b4f-5dd5ad38aa24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:18.342 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[6294e206-cf4d-48ec-aff8-1ef55af15dd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581780, 'reachable_time': 16182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255005, 'error': None, 'target': 'ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:18.347 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 20:02:18 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:18.347 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[eac22b1e-7353-4e0e-bf12-6edc77c37698]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d5c0f6fba\x2d7bb5\x2d44dd\x2d9009\x2da572ffba2e90.mount: Deactivated successfully.
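remove_netns above is neutron's privileged, pyroute2-based wrapper for deleting the metadata namespace; the systemd line confirms the matching /run/netns mount going away. A minimal pyroute2 sketch of the same operation, assuming root privileges:

    from pyroute2 import netns

    name = 'ovnmeta-5c0f6fba-7bb5-44dd-9009-a572ffba2e90'
    if name in netns.listnetns():
        netns.remove(name)  # unlinks /run/netns/<name>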
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.846 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.938 189568 DEBUG nova.network.neutron [req-2f16166b-5a20-4d8c-9d29-45695a20cc87 req-b2c1072a-e19c-4983-935c-109c9b668955 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Updated VIF entry in instance network info cache for port 5f412491-e88a-4387-aa56-6b4e024e1eb2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:02:18 compute-0 nova_compute[189564]: 2025-12-01 20:02:18.939 189568 DEBUG nova.network.neutron [req-2f16166b-5a20-4d8c-9d29-45695a20cc87 req-b2c1072a-e19c-4983-935c-109c9b668955 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Updating instance_info_cache with network_info: [{"id": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "address": "fa:16:3e:ae:1d:64", "network": {"id": "5c0f6fba-7bb5-44dd-9009-a572ffba2e90", "bridge": "br-int", "label": "tempest-ServersTestJSON-1715889735-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "074be7edf37d4e09a02286825460dcb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f412491-e8", "ovs_interfaceid": "5f412491-e88a-4387-aa56-6b4e024e1eb2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:19 compute-0 nova_compute[189564]: 2025-12-01 20:02:19.009 189568 DEBUG oslo_concurrency.lockutils [req-2f16166b-5a20-4d8c-9d29-45695a20cc87 req-b2c1072a-e19c-4983-935c-109c9b668955 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:19 compute-0 nova_compute[189564]: 2025-12-01 20:02:19.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:02:20 compute-0 nova_compute[189564]: 2025-12-01 20:02:20.706 189568 DEBUG nova.network.neutron [-] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:20 compute-0 nova_compute[189564]: 2025-12-01 20:02:20.725 189568 INFO nova.compute.manager [-] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Took 2.72 seconds to deallocate network for instance.#033[00m
Dec  1 20:02:20 compute-0 nova_compute[189564]: 2025-12-01 20:02:20.780 189568 DEBUG oslo_concurrency.lockutils [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:20 compute-0 nova_compute[189564]: 2025-12-01 20:02:20.780 189568 DEBUG oslo_concurrency.lockutils [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:20 compute-0 nova_compute[189564]: 2025-12-01 20:02:20.798 189568 DEBUG nova.compute.manager [req-0ecac081-4588-481d-b55b-944fc724ba0e req-ddd91d74-3439-42e0-8a61-b2a0253a46c0 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Received event network-vif-deleted-5f412491-e88a-4387-aa56-6b4e024e1eb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.050 189568 DEBUG nova.compute.provider_tree [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.086 189568 DEBUG nova.scheduler.client.report [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.114 189568 DEBUG oslo_concurrency.lockutils [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.162 189568 INFO nova.scheduler.client.report [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Deleted allocations for instance 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.234 189568 DEBUG oslo_concurrency.lockutils [None req-736dc799-0e0b-4c80-9857-eaddcf6cab39 b7979dae5a4746189d660cfad52a7031 074be7edf37d4e09a02286825460dcb3 - - default default] Lock "40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
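The Acquiring/acquired/released triples around "compute_resources" and the instance UUID come from oslo.concurrency's named-lock helpers, which log wait and hold times on entry and exit. A minimal sketch of the pattern, assuming oslo.concurrency (the prefix and function are illustrative):

    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def update_usage():
        # Body runs with the named semaphore held; entry and exit
        # emit the "acquired :: waited" / "released :: held" pairs.
        pass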
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.271 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.272 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.272 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.272 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 20:02:21 compute-0 podman[255015]: 2025-12-01 20:02:21.333645159 +0000 UTC m=+0.083359546 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.368 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:21 compute-0 podman[255007]: 2025-12-01 20:02:21.368351448 +0000 UTC m=+0.133722622 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, distribution-scope=public, name=ubi9, io.openshift.expose-services=, vcs-type=git)
Dec  1 20:02:21 compute-0 podman[255009]: 2025-12-01 20:02:21.400118317 +0000 UTC m=+0.139285775 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:02:21 compute-0 podman[255016]: 2025-12-01 20:02:21.405498804 +0000 UTC m=+0.144481126 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  1 20:02:21 compute-0 podman[255008]: 2025-12-01 20:02:21.407809566 +0000 UTC m=+0.155308203 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.431 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.432 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.494 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.501 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.565 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.565 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:21 compute-0 nova_compute[189564]: 2025-12-01 20:02:21.653 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
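Each disk probe above runs qemu-img under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) to bound the cost of parsing an image. A minimal sketch of the same guarded call, assuming oslo.concurrency:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(
            address_space=1073741824,  # --as=1073741824
            cpu_time=30))              # --cpu=30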
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.013 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.015 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5016MB free_disk=72.31034088134766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.015 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.015 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.090 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 4a104baa-5fd5-47aa-973b-11d99c76c3e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.090 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 4ace6300-5447-4f61-9b27-a7249155c57b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.090 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.090 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.156 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.182 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.205 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.206 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
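A worked check of the accounting in the audit above: the two remaining guests each hold {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, the host reserves 512 MB of RAM, and placement's schedulable capacity per resource class is (total - reserved) * allocation_ratio:

    guests = 2
    used_ram = 512 + guests * 128  # -> 768 MB, as in the final resource view
    used_vcpus = guests * 1        # -> 2 of 8

    inventory = {  # (total, reserved, allocation_ratio) from the log
        'VCPU': (8, 0, 4.0),
        'MEMORY_MB': (7680, 512, 1.0),
        'DISK_GB': (79, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2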
Dec  1 20:02:22 compute-0 nova_compute[189564]: 2025-12-01 20:02:22.916 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:23 compute-0 nova_compute[189564]: 2025-12-01 20:02:23.207 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:02:23 compute-0 nova_compute[189564]: 2025-12-01 20:02:23.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:02:23 compute-0 nova_compute[189564]: 2025-12-01 20:02:23.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 20:02:23 compute-0 nova_compute[189564]: 2025-12-01 20:02:23.271 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 20:02:23 compute-0 nova_compute[189564]: 2025-12-01 20:02:23.271 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:02:23 compute-0 nova_compute[189564]: 2025-12-01 20:02:23.849 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:24 compute-0 nova_compute[189564]: 2025-12-01 20:02:24.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:02:25 compute-0 ovn_controller[97948]: 2025-12-01T20:02:25Z|00105|memory|INFO|peak resident set size grew 53% in last 2989.0 seconds, from 16000 kB to 24444 kB
Dec  1 20:02:25 compute-0 ovn_controller[97948]: 2025-12-01T20:02:25Z|00106|memory|INFO|idl-cells-OVN_Southbound:10506 idl-cells-Open_vSwitch:813 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:370 lflow-cache-entries-cache-matches:291 lflow-cache-size-KB:1525 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:629 ofctrl_installed_flow_usage-KB:459 ofctrl_sb_flow_ref_usage-KB:239
Dec  1 20:02:26 compute-0 nova_compute[189564]: 2025-12-01 20:02:26.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:02:27 compute-0 nova_compute[189564]: 2025-12-01 20:02:27.919 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:28 compute-0 nova_compute[189564]: 2025-12-01 20:02:28.370 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "6c1de815-4e42-4798-9a73-220b67333524" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:28 compute-0 nova_compute[189564]: 2025-12-01 20:02:28.370 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:28 compute-0 nova_compute[189564]: 2025-12-01 20:02:28.638 189568 DEBUG nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 20:02:28 compute-0 nova_compute[189564]: 2025-12-01 20:02:28.852 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.461 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.462 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.471 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.471 189568 INFO nova.compute.claims [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Claim successful on node compute-0.ctlplane.example.com
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.714 189568 DEBUG nova.compute.provider_tree [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.745 189568 DEBUG nova.scheduler.client.report [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 20:02:29 compute-0 podman[203750]: time="2025-12-01T20:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:02:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30754 "" "Go-http-client/1.1"
Dec  1 20:02:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5266 "" "Go-http-client/1.1"
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.785 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.785 189568 DEBUG nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.837 189568 DEBUG nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.838 189568 DEBUG nova.network.neutron [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.870 189568 INFO nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 20:02:29 compute-0 nova_compute[189564]: 2025-12-01 20:02:29.947 189568 DEBUG nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.097 189568 DEBUG nova.policy [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '715e289b64b4407387cbcfe958eb2d0f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '162c071887824085bcc9c384a2f8baf0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.161 189568 DEBUG nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.162 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.163 189568 INFO nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Creating image(s)
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.164 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "/var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.165 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "/var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.167 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "/var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.187 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.253 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.254 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.255 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.274 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.340 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.341 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:30 compute-0 ovn_controller[97948]: 2025-12-01T20:02:30Z|00107|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:02:30 compute-0 ovn_controller[97948]: 2025-12-01T20:02:30Z|00108|binding|INFO|Releasing lport cb6caae9-9b40-4384-a692-7fed62ba0bdc from this chassis (sb_readonly=0)
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.433 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.439 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk 1073741824" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.440 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.440 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.514 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.515 189568 DEBUG nova.virt.disk.api [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Checking if we can resize image /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.516 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.571 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.573 189568 DEBUG nova.virt.disk.api [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Cannot resize image /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.574 189568 DEBUG nova.objects.instance [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lazy-loading 'migration_context' on Instance uuid 6c1de815-4e42-4798-9a73-220b67333524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.588 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.588 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.595 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.595 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Ensure instance console log exists: /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.596 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.596 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.597 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.654 189568 DEBUG nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.766 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.766 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.774 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 20:02:30 compute-0 nova_compute[189564]: 2025-12-01 20:02:30.774 189568 INFO nova.compute.claims [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Claim successful on node compute-0.ctlplane.example.com
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.003 189568 DEBUG nova.compute.provider_tree [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.017 189568 DEBUG nova.scheduler.client.report [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.039 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.040 189568 DEBUG nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.083 189568 DEBUG nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.084 189568 DEBUG nova.network.neutron [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.101 189568 INFO nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.199 189568 DEBUG nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 20:02:31 compute-0 openstack_network_exporter[205914]: ERROR   20:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:02:31 compute-0 openstack_network_exporter[205914]: ERROR   20:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:02:31 compute-0 openstack_network_exporter[205914]: ERROR   20:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:02:31 compute-0 openstack_network_exporter[205914]: ERROR   20:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:02:31 compute-0 openstack_network_exporter[205914]: ERROR   20:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.453 189568 DEBUG nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.455 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.456 189568 INFO nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Creating image(s)
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.458 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "/var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.459 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "/var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.460 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "/var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.489 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.563 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.564 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.565 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.577 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.601 189568 DEBUG nova.policy [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '304fade4774b4bb3838efcc56501f582', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bde8983778e8471a8b7f6da9e9d53732', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.669 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.670 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.722 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.723 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.724 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.811 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.812 189568 DEBUG nova.virt.disk.api [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Checking if we can resize image /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.813 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.902 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.903 189568 DEBUG nova.virt.disk.api [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Cannot resize image /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.904 189568 DEBUG nova.objects.instance [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lazy-loading 'migration_context' on Instance uuid 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.919 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.919 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Ensure instance console log exists: /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.920 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.920 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:31 compute-0 nova_compute[189564]: 2025-12-01 20:02:31.921 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:32 compute-0 nova_compute[189564]: 2025-12-01 20:02:32.425 189568 DEBUG nova.network.neutron [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Successfully created port: 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  1 20:02:32 compute-0 nova_compute[189564]: 2025-12-01 20:02:32.438 189568 DEBUG nova.network.neutron [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Successfully created port: 05dcfe74-fe60-45d4-b1df-aec9fcc57adb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  1 20:02:32 compute-0 nova_compute[189564]: 2025-12-01 20:02:32.848 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619337.847952, 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 20:02:32 compute-0 nova_compute[189564]: 2025-12-01 20:02:32.849 189568 INFO nova.compute.manager [-] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] VM Stopped (Lifecycle Event)
Dec  1 20:02:32 compute-0 nova_compute[189564]: 2025-12-01 20:02:32.875 189568 DEBUG nova.compute.manager [None req-8b29c93c-c010-4017-a890-151583416a59 - - - - - -] [instance: 40daa6fd-543f-42a7-8b3f-8bbbd3b4ecc0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 20:02:32 compute-0 nova_compute[189564]: 2025-12-01 20:02:32.921 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:33 compute-0 podman[255155]: 2025-12-01 20:02:33.351864127 +0000 UTC m=+0.120499441 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, release=1755695350)
Dec  1 20:02:33 compute-0 nova_compute[189564]: 2025-12-01 20:02:33.442 189568 DEBUG nova.network.neutron [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Successfully updated port: 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 20:02:33 compute-0 nova_compute[189564]: 2025-12-01 20:02:33.461 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "refresh_cache-421c1bd5-7edf-41ce-b0a5-872efcaf35b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:33 compute-0 nova_compute[189564]: 2025-12-01 20:02:33.461 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquired lock "refresh_cache-421c1bd5-7edf-41ce-b0a5-872efcaf35b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:33 compute-0 nova_compute[189564]: 2025-12-01 20:02:33.461 189568 DEBUG nova.network.neutron [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:02:33 compute-0 nova_compute[189564]: 2025-12-01 20:02:33.775 189568 DEBUG nova.network.neutron [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 20:02:33 compute-0 nova_compute[189564]: 2025-12-01 20:02:33.854 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.094 189568 DEBUG nova.compute.manager [req-b072c661-97d3-4d62-b14f-21290abf750e req-25e0fb6d-4c2f-444e-a75d-db2f5fc6d4af 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received event network-changed-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.094 189568 DEBUG nova.compute.manager [req-b072c661-97d3-4d62-b14f-21290abf750e req-25e0fb6d-4c2f-444e-a75d-db2f5fc6d4af 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Refreshing instance network info cache due to event network-changed-36c65cc8-9f73-47e0-8a82-7ca2a02890e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.094 189568 DEBUG oslo_concurrency.lockutils [req-b072c661-97d3-4d62-b14f-21290abf750e req-25e0fb6d-4c2f-444e-a75d-db2f5fc6d4af 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-421c1bd5-7edf-41ce-b0a5-872efcaf35b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.469 189568 DEBUG nova.network.neutron [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Successfully updated port: 05dcfe74-fe60-45d4-b1df-aec9fcc57adb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.489 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.489 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquired lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.490 189568 DEBUG nova.network.neutron [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.767 189568 DEBUG nova.network.neutron [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Updating instance_info_cache with network_info: [{"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
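
The network_info payload logged above is plain JSON. A minimal sketch of pulling the fixed IPs and MTU out of one such VIF entry (field names and values copied from the log line above; the payload here is abridged):

    import json

    # Abridged copy of the network_info entry from the log line above.
    network_info_json = '''[{"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5",
      "address": "fa:16:3e:67:e4:f2",
      "devname": "tap36c65cc8-9f",
      "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db",
                  "bridge": "br-int",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.14", "type": "fixed"}]}],
                  "meta": {"mtu": 1442}}}]'''

    for vif in json.loads(network_info_json):
        fixed = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["devname"], vif["address"], fixed, vif["network"]["meta"]["mtu"])
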
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.791 189568 DEBUG nova.compute.manager [req-4d117f9b-37b2-4680-b4d7-3bba14b8a359 req-50903c77-232c-43b8-a5d8-30bd15f1b1bb 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received event network-changed-05dcfe74-fe60-45d4-b1df-aec9fcc57adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.791 189568 DEBUG nova.compute.manager [req-4d117f9b-37b2-4680-b4d7-3bba14b8a359 req-50903c77-232c-43b8-a5d8-30bd15f1b1bb 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Refreshing instance network info cache due to event network-changed-05dcfe74-fe60-45d4-b1df-aec9fcc57adb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.792 189568 DEBUG oslo_concurrency.lockutils [req-4d117f9b-37b2-4680-b4d7-3bba14b8a359 req-50903c77-232c-43b8-a5d8-30bd15f1b1bb 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.793 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Releasing lock "refresh_cache-421c1bd5-7edf-41ce-b0a5-872efcaf35b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.793 189568 DEBUG nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Instance network_info: |[{"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.793 189568 DEBUG oslo_concurrency.lockutils [req-b072c661-97d3-4d62-b14f-21290abf750e req-25e0fb6d-4c2f-444e-a75d-db2f5fc6d4af 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-421c1bd5-7edf-41ce-b0a5-872efcaf35b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.794 189568 DEBUG nova.network.neutron [req-b072c661-97d3-4d62-b14f-21290abf750e req-25e0fb6d-4c2f-444e-a75d-db2f5fc6d4af 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Refreshing network info cache for port 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.796 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Start _get_guest_xml network_info=[{"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.805 189568 WARNING nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.819 189568 DEBUG nova.virt.libvirt.host [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.820 189568 DEBUG nova.virt.libvirt.host [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.826 189568 DEBUG nova.virt.libvirt.host [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.827 189568 DEBUG nova.virt.libvirt.host [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
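
The two probes above try a cgroup v1 CPU controller first and then fall back to v2, where the enabled controllers are a space-separated list in a single file. A minimal sketch of the v2 half of that check (the sysfs path is the standard cgroup v2 location; the function name here is made up):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On a cgroup v2 (unified) host this file lists the enabled controllers.
        f = Path(root, "cgroup.controllers")
        return f.exists() and "cpu" in f.read_text().split()

    print(has_cgroupsv2_cpu_controller())
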
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.827 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.828 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.829 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.829 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.829 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.830 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.830 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.830 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.831 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.831 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.832 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.832 189568 DEBUG nova.virt.hardware [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
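
With every flavor and image limit and preference logged as 0:0:0, the search space collapses: the only (sockets, cores, threads) triple whose product is 1 vCPU is 1:1:1, which is exactly what gets chosen. A simplified sketch of that enumeration, not nova's actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product equals the vCPU count.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log above
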
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.835 189568 DEBUG nova.virt.libvirt.vif [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:02:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-48441956',display_name='tempest-TestServerBasicOps-server-48441956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-48441956',id=11,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAjaBVksdVBINl9zeD8esJMb4Vfc08yy8kW7yEo+Tn5f93Vx5EP21WRviUp4cdA9l5B1MnoKZGq0fFz416IF/plwNciZi0lqZU9c6SZEc6R79Ku1E8FXtQULIca0cSlUsA==',key_name='tempest-TestServerBasicOps-232633533',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bde8983778e8471a8b7f6da9e9d53732',ramdisk_id='',reservation_id='r-g6r0wj4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-212789688',owner_user_name='tempest-TestServerBasicOps-212789688-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:02:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='304fade4774b4bb3838efcc56501f582',uuid=421c1bd5-7edf-41ce-b0a5-872efcaf35b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": 
"36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.836 189568 DEBUG nova.network.os_vif_util [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Converting VIF {"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.836 189568 DEBUG nova.network.os_vif_util [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:e4:f2,bridge_name='br-int',has_traffic_filtering=True,id=36c65cc8-9f73-47e0-8a82-7ca2a02890e5,network=Network(61c137f0-effb-4f90-8a6c-ea3831f8e4db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36c65cc8-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.837 189568 DEBUG nova.objects.instance [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lazy-loading 'pci_devices' on Instance uuid 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.850 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <uuid>421c1bd5-7edf-41ce-b0a5-872efcaf35b0</uuid>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <name>instance-0000000b</name>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <nova:name>tempest-TestServerBasicOps-server-48441956</nova:name>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:02:34</nova:creationTime>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:02:34 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:02:34 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:02:34 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:02:34 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:02:34 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:02:34 compute-0 nova_compute[189564]:        <nova:user uuid="304fade4774b4bb3838efcc56501f582">tempest-TestServerBasicOps-212789688-project-member</nova:user>
Dec  1 20:02:34 compute-0 nova_compute[189564]:        <nova:project uuid="bde8983778e8471a8b7f6da9e9d53732">tempest-TestServerBasicOps-212789688</nova:project>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="d169c234-7ac2-4fdc-b9fa-a08c93484d75"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:02:34 compute-0 nova_compute[189564]:        <nova:port uuid="36c65cc8-9f73-47e0-8a82-7ca2a02890e5">
Dec  1 20:02:34 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <system>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <entry name="serial">421c1bd5-7edf-41ce-b0a5-872efcaf35b0</entry>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <entry name="uuid">421c1bd5-7edf-41ce-b0a5-872efcaf35b0</entry>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    </system>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <os>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  </os>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <features>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  </features>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.config"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:67:e4:f2"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <target dev="tap36c65cc8-9f"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/console.log" append="off"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <video>
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    </video>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:02:34 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:02:34 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:02:34 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:02:34 compute-0 nova_compute[189564]: </domain>
Dec  1 20:02:34 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
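
Once _get_guest_xml returns, the driver hands this XML to libvirt to define and boot the domain. A minimal sketch of the equivalent raw libvirt-python calls (generic API, not nova's wrapper; assumes the XML above was saved to domain.xml and a local qemu:///system socket):

    import libvirt

    with open("domain.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # start the guest (equivalent to virsh start)
    print(dom.name(), dom.ID())
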
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.852 189568 DEBUG nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Preparing to wait for external event network-vif-plugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.852 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.853 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.853 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
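
The waiter for network-vif-plugged is registered here, before the VIF is actually plugged, so the notification cannot race past nova. A toy analogue of that prepare-then-deliver pattern with a lock-guarded event table (simplified; the real logic lives in nova.compute.manager.InstanceEvents, referenced in the lock names above):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, uuid, name):
            # Register the waiter *before* starting the action that fires the event.
            with self._lock:
                return self._events.setdefault((uuid, name), threading.Event())

        def deliver(self, uuid, name):
            with self._lock:
                ev = self._events.pop((uuid, name), None)
            if ev:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare("421c1bd5", "network-vif-plugged-36c65cc8")
    events.deliver("421c1bd5", "network-vif-plugged-36c65cc8")
    print(waiter.wait(timeout=1))  # True: the event arrived
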
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.854 189568 DEBUG nova.virt.libvirt.vif [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:02:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-48441956',display_name='tempest-TestServerBasicOps-server-48441956',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-48441956',id=11,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAjaBVksdVBINl9zeD8esJMb4Vfc08yy8kW7yEo+Tn5f93Vx5EP21WRviUp4cdA9l5B1MnoKZGq0fFz416IF/plwNciZi0lqZU9c6SZEc6R79Ku1E8FXtQULIca0cSlUsA==',key_name='tempest-TestServerBasicOps-232633533',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bde8983778e8471a8b7f6da9e9d53732',ramdisk_id='',reservation_id='r-g6r0wj4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-212789688',owner_user_name='tempest-TestServerBasicOps-212789688-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:02:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='304fade4774b4bb3838efcc56501f582',uuid=421c1bd5-7edf-41ce-b0a5-872efcaf35b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": 
"36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.854 189568 DEBUG nova.network.os_vif_util [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Converting VIF {"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.855 189568 DEBUG nova.network.os_vif_util [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:e4:f2,bridge_name='br-int',has_traffic_filtering=True,id=36c65cc8-9f73-47e0-8a82-7ca2a02890e5,network=Network(61c137f0-effb-4f90-8a6c-ea3831f8e4db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36c65cc8-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.855 189568 DEBUG os_vif [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:e4:f2,bridge_name='br-int',has_traffic_filtering=True,id=36c65cc8-9f73-47e0-8a82-7ca2a02890e5,network=Network(61c137f0-effb-4f90-8a6c-ea3831f8e4db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36c65cc8-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.856 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.857 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.857 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.861 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.861 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36c65cc8-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.861 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36c65cc8-9f, col_values=(('external_ids', {'iface-id': '36c65cc8-9f73-47e0-8a82-7ca2a02890e5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:e4:f2', 'vm-uuid': '421c1bd5-7edf-41ce-b0a5-872efcaf35b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.863 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:34 compute-0 NetworkManager[56474]: <info>  [1764619354.8645] manager: (tap36c65cc8-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.865 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.867 189568 DEBUG nova.network.neutron [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.872 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.873 189568 INFO os_vif [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:e4:f2,bridge_name='br-int',has_traffic_filtering=True,id=36c65cc8-9f73-47e0-8a82-7ca2a02890e5,network=Network(61c137f0-effb-4f90-8a6c-ea3831f8e4db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36c65cc8-9f')#033[00m
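
The two ovsdbapp transactions above (AddPortCommand plus a DbSetCommand on the Interface row) amount to a single ovs-vsctl invocation. A sketch of that equivalent via subprocess (port name, MAC, and external_ids values copied from the log; needs ovs-vsctl and root, so illustrative only):

    import subprocess

    bridge, port = "br-int", "tap36c65cc8-9f"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", bridge, port, "--",
         "set", "Interface", port,
         "external_ids:iface-id=36c65cc8-9f73-47e0-8a82-7ca2a02890e5",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:67:e4:f2",
         "external_ids:vm-uuid=421c1bd5-7edf-41ce-b0a5-872efcaf35b0"],
        check=True)
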
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.935 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.935 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.935 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] No VIF found with MAC fa:16:3e:67:e4:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:02:34 compute-0 nova_compute[189564]: 2025-12-01 20:02:34.936 189568 INFO nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Using config drive#033[00m
Dec  1 20:02:35 compute-0 nova_compute[189564]: 2025-12-01 20:02:35.397 189568 INFO nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Creating config drive at /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.config#033[00m
Dec  1 20:02:35 compute-0 nova_compute[189564]: 2025-12-01 20:02:35.409 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiefuwjzd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:35 compute-0 nova_compute[189564]: 2025-12-01 20:02:35.555 189568 DEBUG oslo_concurrency.processutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiefuwjzd" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
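
The config drive is nothing more than an ISO9660 image labeled config-2 built from a staging directory. A sketch reproducing the command logged above (flags and publisher string copied from the log; the staging path is the temporary directory nova populated with the openstack/ metadata tree):

    import subprocess

    staging = "/tmp/tmpiefuwjzd"  # temp dir holding the openstack/ metadata tree
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", "disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging],
        check=True)
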
Dec  1 20:02:35 compute-0 kernel: tap36c65cc8-9f: entered promiscuous mode
Dec  1 20:02:35 compute-0 NetworkManager[56474]: <info>  [1764619355.6511] manager: (tap36c65cc8-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Dec  1 20:02:35 compute-0 ovn_controller[97948]: 2025-12-01T20:02:35Z|00109|binding|INFO|Claiming lport 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 for this chassis.
Dec  1 20:02:35 compute-0 ovn_controller[97948]: 2025-12-01T20:02:35Z|00110|binding|INFO|36c65cc8-9f73-47e0-8a82-7ca2a02890e5: Claiming fa:16:3e:67:e4:f2 10.100.0.14
Dec  1 20:02:35 compute-0 nova_compute[189564]: 2025-12-01 20:02:35.661 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:35 compute-0 ovn_controller[97948]: 2025-12-01T20:02:35Z|00111|binding|INFO|Setting lport 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 ovn-installed in OVS
Dec  1 20:02:35 compute-0 nova_compute[189564]: 2025-12-01 20:02:35.694 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:35 compute-0 nova_compute[189564]: 2025-12-01 20:02:35.703 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:35 compute-0 systemd-udevd[255192]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:02:35 compute-0 NetworkManager[56474]: <info>  [1764619355.7348] device (tap36c65cc8-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:02:35 compute-0 NetworkManager[56474]: <info>  [1764619355.7356] device (tap36c65cc8-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:02:35 compute-0 ovn_controller[97948]: 2025-12-01T20:02:35Z|00112|binding|INFO|Setting lport 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 up in Southbound
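
Once ovn-controller has claimed the lport and set it up in the Southbound DB, the binding can be spot-checked directly against Port_Binding. A sketch of such a check (assumes ovn-sbctl on a node with Southbound access; the column names match the Port_Binding row dumped below):

    import subprocess

    lport = "36c65cc8-9f73-47e0-8a82-7ca2a02890e5"
    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "list", "Port_Binding", lport],
        capture_output=True, text=True, check=True)
    print(out.stdout)
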
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.854 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:e4:f2 10.100.0.14'], port_security=['fa:16:3e:67:e4:f2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '421c1bd5-7edf-41ce-b0a5-872efcaf35b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61c137f0-effb-4f90-8a6c-ea3831f8e4db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bde8983778e8471a8b7f6da9e9d53732', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfd44490-c6a6-4dbb-b2ea-afe6ce03a378 e9f2ae9c-ee72-46a2-b911-c2f7a0a61f4f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0cf4599-31fb-4d2b-a772-41955e5d1a1c, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=36c65cc8-9f73-47e0-8a82-7ca2a02890e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.858 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 in datapath 61c137f0-effb-4f90-8a6c-ea3831f8e4db bound to our chassis#033[00m
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.864 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61c137f0-effb-4f90-8a6c-ea3831f8e4db#033[00m
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.874 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[cacce509-781e-418f-ae44-ca83b2d34dab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.876 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61c137f0-e1 in ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
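
Provisioning metadata for the datapath means a dedicated ovnmeta- namespace with a veth pair: tap61c137f0-e1 inside the namespace, its peer tap61c137f0-e0 left outside to be plugged into br-int. A rough ip(8)-based equivalent of what the agent does through pyroute2 under privsep (simplified, needs root; names copied from the log lines around here):

    import subprocess

    ns = "ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db"
    outer, inner = "tap61c137f0-e0", "tap61c137f0-e1"

    for cmd in (
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", outer, "type", "veth", "peer", "name", inner],
        ["ip", "link", "set", inner, "netns", ns],
        ["ip", "link", "set", outer, "up"],
        ["ip", "-n", ns, "link", "set", inner, "up"],
    ):
        subprocess.run(cmd, check=True)
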
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.879 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61c137f0-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.879 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3f21ac-b97c-405a-b6c7-872113b36643]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.882 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[aae025a4-e385-48e5-9d8f-cdffa07b47cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:35 compute-0 systemd-machined[155891]: New machine qemu-10-instance-0000000b.
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.896 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[75389d79-c273-4fe5-81a1-5365239f2d1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:35 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000b.
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.922 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[2a418d53-2407-4cf0-933a-9599dc0191fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.959 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[90596800-58f5-44be-b487-b4f0e839d092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:35 compute-0 NetworkManager[56474]: <info>  [1764619355.9718] manager: (tap61c137f0-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Dec  1 20:02:35 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:35.973 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[3626e257-dfca-42b8-83e8-a7a02130524e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.017 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1ded39-4e25-40a3-960d-1e304c3b8edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.020 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[5cbfc8b0-e1d3-4478-94a6-e8b9d0e08646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:36 compute-0 NetworkManager[56474]: <info>  [1764619356.0448] device (tap61c137f0-e0): carrier: link connected
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.052 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff42771-c171-41dd-9a7d-ce3aec2845a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.074 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3b4241-31d8-4680-9adb-3e1c880815af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61c137f0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:7f:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584376, 'reachable_time': 20449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255228, 'error': None, 'target': 'ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.092 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d4846f0c-0a16-49d2-83fb-9018b9ba51aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe68:7f93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584376, 'tstamp': 584376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255229, 'error': None, 'target': 'ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.113 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8378d5-f707-4f64-b5d6-50372cd0a233]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61c137f0-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:68:7f:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584376, 'reachable_time': 20449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255230, 'error': None, 'target': 'ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.149 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[91b13900-ec1d-41df-a30a-925c3ba82508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
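
The privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK / RTM_NEWADDR) taken inside the ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db namespace; the 'target' field in each message header names that namespace. A minimal stand-alone sketch of the same query, assuming pyroute2 is installed, the namespace exists, and the caller has root privileges:

    from pyroute2 import NetNS

    # Query link and address state inside the OVN metadata namespace,
    # which is what the agent does indirectly through the privsep daemon.
    with NetNS('ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),      # e.g. tap61c137f0-e1
                  link.get_attr('IFLA_OPERSTATE'))   # e.g. UP
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'))      # fe80::f816:3eff:fe68:7f93
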
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.175 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.208 189568 WARNING nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] While synchronizing instance power states, found 4 instances in the database and 3 instances on the hypervisor.#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.209 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.210 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid 4ace6300-5447-4f61-9b27-a7249155c57b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.211 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid 6c1de815-4e42-4798-9a73-220b67333524 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.211 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Triggering sync for uuid 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.212 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.213 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.215 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "4ace6300-5447-4f61-9b27-a7249155c57b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.216 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "4ace6300-5447-4f61-9b27-a7249155c57b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.217 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "6c1de815-4e42-4798-9a73-220b67333524" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.217 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
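
Each instance UUID above gets its own named lock before the hypervisor is queried, using the oslo.concurrency helper the log cites (lockutils.py). A toy version of that pattern, with the UUIDs taken from the log and the lock body left as a placeholder:

    from oslo_concurrency import lockutils

    def query_driver_power_state_and_sync(uuid):
        # lockutils.lock() defaults to an in-process semaphore, so two
        # syncs of the same instance serialize while different UUIDs can
        # proceed in parallel -- the acquire/release pairs logged above.
        with lockutils.lock(uuid):
            pass  # placeholder: compare DB power_state with the driver's

    for uuid in ('4a104baa-5fd5-47aa-973b-11d99c76c3e2',
                 '4ace6300-5447-4f61-9b27-a7249155c57b',
                 '6c1de815-4e42-4798-9a73-220b67333524',
                 '421c1bd5-7edf-41ce-b0a5-872efcaf35b0'):
        query_driver_power_state_and_sync(uuid)
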
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.233 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d0e1d7-f4a2-4684-b1f8-a29d879bcf08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.234 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61c137f0-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.234 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.235 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61c137f0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.237 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:36 compute-0 NetworkManager[56474]: <info>  [1764619356.2382] manager: (tap61c137f0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Dec  1 20:02:36 compute-0 kernel: tap61c137f0-e0: entered promiscuous mode
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.241 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61c137f0-e0, col_values=(('external_ids', {'iface-id': '39b24bc2-6265-4d8f-9166-2751c476b101'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:36 compute-0 ovn_controller[97948]: 2025-12-01T20:02:36Z|00113|binding|INFO|Releasing lport 39b24bc2-6265-4d8f-9166-2751c476b101 from this chassis (sb_readonly=0)
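
The DelPortCommand/AddPortCommand/DbSetCommand entries are ovsdbapp transaction commands: the agent removes tap61c137f0-e0 from br-ex (a no-op here), attaches it to br-int, and stamps the Interface row with the OVN iface-id so ovn-controller can (re)bind the port. A rough stand-alone equivalent with ovsdbapp, assuming a local ovsdb-server at the usual socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # path assumed
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction, mirroring the three commands in the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap61c137f0-e0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap61c137f0-e0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap61c137f0-e0',
            ('external_ids',
             {'iface-id': '39b24bc2-6265-4d8f-9166-2751c476b101'})))
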
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.248 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.251 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "4ace6300-5447-4f61-9b27-a7249155c57b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.268 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61c137f0-effb-4f90-8a6c-ea3831f8e4db.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61c137f0-effb-4f90-8a6c-ea3831f8e4db.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.269 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.269 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[2f132492-21a9-4d55-92b9-25fb1ce651a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.270 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: global
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-61c137f0-effb-4f90-8a6c-ea3831f8e4db
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/61c137f0-effb-4f90-8a6c-ea3831f8e4db.pid.haproxy
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID 61c137f0-effb-4f90-8a6c-ea3831f8e4db
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 20:02:36 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:36.271 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db', 'env', 'PROCESS_TAG=haproxy-61c137f0-effb-4f90-8a6c-ea3831f8e4db', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61c137f0-effb-4f90-8a6c-ea3831f8e4db.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
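
The earlier [Errno 2] from get_value_from_file just means no haproxy pidfile exists yet for this network, so the agent renders the config above and spawns a fresh proxy through rootwrap inside the ovnmeta namespace. A small sketch of that liveness probe, with the paths taken from the log and the helper name ours:

    import os

    def proxy_pid(network_id, pid_dir='/var/lib/neutron/external/pids'):
        path = os.path.join(pid_dir, network_id + '.pid.haproxy')
        try:
            with open(path) as f:
                pid = int(f.read().strip())
        except (FileNotFoundError, ValueError):
            return None        # no pidfile yet -> the Errno 2 case above
        try:
            os.kill(pid, 0)    # signal 0 only checks the process exists
        except ProcessLookupError:
            return None        # stale pidfile
        return pid

    print(proxy_pid('61c137f0-effb-4f90-8a6c-ea3831f8e4db'))
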
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.324 189568 DEBUG nova.network.neutron [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updating instance_info_cache with network_info: [{"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.358 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Releasing lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.359 189568 DEBUG nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Instance network_info: |[{"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.361 189568 DEBUG oslo_concurrency.lockutils [req-4d117f9b-37b2-4680-b4d7-3bba14b8a359 req-50903c77-232c-43b8-a5d8-30bd15f1b1bb 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.362 189568 DEBUG nova.network.neutron [req-4d117f9b-37b2-4680-b4d7-3bba14b8a359 req-50903c77-232c-43b8-a5d8-30bd15f1b1bb 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Refreshing network info cache for port 05dcfe74-fe60-45d4-b1df-aec9fcc57adb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.367 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Start _get_guest_xml network_info=[{"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.388 189568 WARNING nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.400 189568 DEBUG nova.virt.libvirt.host [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.401 189568 DEBUG nova.virt.libvirt.host [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.408 189568 DEBUG nova.virt.libvirt.host [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.409 189568 DEBUG nova.virt.libvirt.host [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
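
The paired probes above show the driver falling back from cgroups v1 (no CPU controller) to cgroups v2, where it finds one; on a unified-hierarchy host the available controllers are listed in a single file. A minimal check along those lines (illustrative, not nova's actual code):

    # On cgroups v2 the root lists every available controller in one file.
    CONTROLLERS_FILE = '/sys/fs/cgroup/cgroup.controllers'

    def has_cgroupsv2_cpu_controller():
        try:
            with open(CONTROLLERS_FILE) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroups v2 (unified hierarchy) host

    print(has_cgroupsv2_cpu_controller())
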
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.410 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.410 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.412 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.413 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.413 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.414 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.415 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.416 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.416 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.417 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.418 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.418 189568 DEBUG nova.virt.hardware [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
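
With flavor and image limits of 0:0:0, the fallback maxima of 65536 per dimension are effectively unconstrained, and a 1-vCPU guest admits exactly one (sockets, cores, threads) factorization, hence the single VirtCPUTopology(cores=1,sockets=1,threads=1). A toy enumeration of the same search:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) triple whose product is vcpus.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"
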
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.424 189568 DEBUG nova.virt.libvirt.vif [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:02:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1354137625',display_name='tempest-TestNetworkBasicOps-server-1354137625',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1354137625',id=10,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXXL1VmYJQIcc1w3eeVop88t2Ef6y4FcvSuzTqjnp4aoRVZAWxw/mpCexZIWojf5DtgeBdIftUsHhfzzaOrN8U3tBt+3B3E1Cnro9vJzaqRXCHV+LgsCurD0OxCo26xfA==',key_name='tempest-TestNetworkBasicOps-1284131701',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='162c071887824085bcc9c384a2f8baf0',ramdisk_id='',reservation_id='r-yvgdafub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-11937336',owner_user_name='tempest-TestNetworkBasicOps-11937336-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:02:30Z,user_data=None,user_id='715e289b64b4407387cbcfe958eb2d0f',uuid=6c1de815-4e42-4798-9a73-220b67333524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.425 189568 DEBUG nova.network.os_vif_util [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converting VIF {"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.427 189568 DEBUG nova.network.os_vif_util [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:ce:cc,bridge_name='br-int',has_traffic_filtering=True,id=05dcfe74-fe60-45d4-b1df-aec9fcc57adb,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05dcfe74-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.429 189568 DEBUG nova.objects.instance [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c1de815-4e42-4798-9a73-220b67333524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.431 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619356.4230113, 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.432 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] VM Started (Lifecycle Event)#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.457 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <uuid>6c1de815-4e42-4798-9a73-220b67333524</uuid>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <name>instance-0000000a</name>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <nova:name>tempest-TestNetworkBasicOps-server-1354137625</nova:name>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:02:36</nova:creationTime>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:02:36 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:02:36 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:02:36 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:02:36 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:02:36 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:02:36 compute-0 nova_compute[189564]:        <nova:user uuid="715e289b64b4407387cbcfe958eb2d0f">tempest-TestNetworkBasicOps-11937336-project-member</nova:user>
Dec  1 20:02:36 compute-0 nova_compute[189564]:        <nova:project uuid="162c071887824085bcc9c384a2f8baf0">tempest-TestNetworkBasicOps-11937336</nova:project>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="d169c234-7ac2-4fdc-b9fa-a08c93484d75"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:02:36 compute-0 nova_compute[189564]:        <nova:port uuid="05dcfe74-fe60-45d4-b1df-aec9fcc57adb">
Dec  1 20:02:36 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <system>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <entry name="serial">6c1de815-4e42-4798-9a73-220b67333524</entry>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <entry name="uuid">6c1de815-4e42-4798-9a73-220b67333524</entry>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    </system>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <os>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  </os>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <features>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  </features>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk.config"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:96:ce:cc"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <target dev="tap05dcfe74-fe"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/console.log" append="off"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <video>
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    </video>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:02:36 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:02:36 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:02:36 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:02:36 compute-0 nova_compute[189564]: </domain>
Dec  1 20:02:36 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
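
Once _get_guest_xml returns, the libvirt driver defines a domain from this XML and boots it. A stand-alone equivalent with the libvirt-python bindings, assuming the XML above has been saved to domain.xml and a system libvirtd/virtqemud is reachable:

    import libvirt

    with open('domain.xml') as f:      # the <domain> document above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)          # persist the domain definition
    dom.create()                       # power on instance-0000000a
    print(dom.name(), dom.ID())
    conn.close()
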
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.471 189568 DEBUG nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Preparing to wait for external event network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.471 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "6c1de815-4e42-4798-9a73-220b67333524-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.472 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.472 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
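
The "<uuid>-events" lock guards nova's table of pending external events: the network-vif-plugged event is registered before the VIF is plugged, so Neutron's later notification cannot race past the waiter. A simplified thread-based version of that prepare-then-wait pattern (nova's real implementation differs; the names here are ours):

    import threading

    _events, _table_lock = {}, threading.Lock()   # the "<uuid>-events" lock

    def prepare_event(instance_uuid, name):
        with _table_lock:
            return _events.setdefault((instance_uuid, name), threading.Event())

    def deliver_event(instance_uuid, name):
        with _table_lock:
            ev = _events.get((instance_uuid, name))
        if ev:
            ev.set()

    key = ('6c1de815-4e42-4798-9a73-220b67333524',
           'network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb')
    ev = prepare_event(*key)
    deliver_event(*key)          # normally fired later by the Neutron callback
    print(ev.wait(timeout=300))  # True: the event arrived before the timeout
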
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.473 189568 DEBUG nova.virt.libvirt.vif [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:02:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1354137625',display_name='tempest-TestNetworkBasicOps-server-1354137625',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1354137625',id=10,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXXL1VmYJQIcc1w3eeVop88t2Ef6y4FcvSuzTqjnp4aoRVZAWxw/mpCexZIWojf5DtgeBdIftUsHhfzzaOrN8U3tBt+3B3E1Cnro9vJzaqRXCHV+LgsCurD0OxCo26xfA==',key_name='tempest-TestNetworkBasicOps-1284131701',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='162c071887824085bcc9c384a2f8baf0',ramdisk_id='',reservation_id='r-yvgdafub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-11937336',owner_user_name='tempest-TestNetworkBasicOps-11937336-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:02:30Z,user_data=None,user_id='715e289b64b4407387cbcfe958eb2d0f',uuid=6c1de815-4e42-4798-9a73-220b67333524,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.473 189568 DEBUG nova.network.os_vif_util [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converting VIF {"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.474 189568 DEBUG nova.network.os_vif_util [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:ce:cc,bridge_name='br-int',has_traffic_filtering=True,id=05dcfe74-fe60-45d4-b1df-aec9fcc57adb,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05dcfe74-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.474 189568 DEBUG os_vif [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:ce:cc,bridge_name='br-int',has_traffic_filtering=True,id=05dcfe74-fe60-45d4-b1df-aec9fcc57adb,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05dcfe74-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
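[editor's note] The three records above trace the hand-off from nova to os-vif: the libvirt vif driver receives the legacy VIF dict, os_vif_util converts it into a typed VIFOpenVSwitch object, and os_vif.plug() is invoked with it. Reduced to os-vif's public API, the step looks roughly like the sketch below; field values are copied from the log, while the setup calls and the omitted subnet/port-profile details are illustrative assumptions, not nova's exact code path.

```python
# Minimal sketch of the plug step recorded above, using os-vif's public
# API. Field values come from the log; the setup calls and the omitted
# subnet/port-profile details are assumptions for illustration.
import os_vif
from os_vif import objects

os_vif.initialize()      # load the registered plugins ('ovs' among them)
objects.register_all()   # register the versioned object classes

net = objects.network.Network(id='d273f808-5cbd-4428-9f2c-ed8b50232c12',
                              bridge='br-int', mtu=1442)
vif = objects.vif.VIFOpenVSwitch(
    id='05dcfe74-fe60-45d4-b1df-aec9fcc57adb',
    address='fa:16:3e:96:ce:cc',
    network=net,
    vif_name='tap05dcfe74-fe',
    bridge_name='br-int')
inst = objects.instance_info.InstanceInfo(
    uuid='6c1de815-4e42-4798-9a73-220b67333524',
    name='tempest-TestNetworkBasicOps-server-1354137625')

# Dispatches to the 'ovs' plugin, which issues the ovsdbapp commands
# that appear in the transaction records below.
os_vif.plug(vif, inst)
```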
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.476 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.476 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.476 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.477 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.482 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.483 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05dcfe74-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.484 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05dcfe74-fe, col_values=(('external_ids', {'iface-id': '05dcfe74-fe60-45d4-b1df-aec9fcc57adb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:ce:cc', 'vm-uuid': '6c1de815-4e42-4798-9a73-220b67333524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
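[editor's note] That two-command transaction is the whole of the ovs plug on the OVSDB side: idempotently add the tap port to br-int, then stamp its Interface row with the external_ids that let ovn-controller match the interface to logical port 05dcfe74-fe60-45d4-b1df-aec9fcc57adb (the "Claiming lport" lines further down). A standalone approximation with ovsdbapp's public Open_vSwitch API follows; the connection string and timeout are assumptions, the column values are from the log.

```python
# Rough equivalent of the AddPortCommand/DbSetCommand pair logged above,
# using ovsdbapp's public Open_vSwitch API. The connection string and
# timeout are assumptions; the column values come from the log.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/var/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tap05dcfe74-fe', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap05dcfe74-fe',
        ('external_ids', {
            'iface-id': '05dcfe74-fe60-45d4-b1df-aec9fcc57adb',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:96:ce:cc',
            'vm-uuid': '6c1de815-4e42-4798-9a73-220b67333524'})))
```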
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.486 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:36 compute-0 NetworkManager[56474]: <info>  [1764619356.4870] manager: (tap05dcfe74-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.489 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.492 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619356.4231217, 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.492 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.494 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.495 189568 INFO os_vif [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:ce:cc,bridge_name='br-int',has_traffic_filtering=True,id=05dcfe74-fe60-45d4-b1df-aec9fcc57adb,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05dcfe74-fe')#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.529 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.534 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.568 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
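[editor's note] The two records above are nova's power-state sync firing mid-spawn: libvirt briefly reports the guest as PAUSED (power state 3, since nova starts domains paused and resumes them after VIF plugging) while the DB still says NOSTATE (0), and the handler skips because task_state is still 'spawning'. A simplified illustration of that guard, using nova's power-state constants; this is a stand-in for ComputeManager._sync_instance_power_state, not a copy of it.

```python
# Illustration of the guard seen above, with nova's power-state
# constants (0=NOSTATE, 1=RUNNING, 3=PAUSED). Simplified stand-in for
# ComputeManager._sync_instance_power_state, not nova's actual code.
NOSTATE, RUNNING, PAUSED = 0, 1, 3

def sync_power_state(instance: dict, vm_power_state: int) -> None:
    if instance['task_state'] is not None:
        # Mid-operation disagreement is expected, e.g. libvirt reports
        # PAUSED while task_state is still 'spawning': log and skip.
        print(f"pending task ({instance['task_state']}), skip")
        return
    if instance['power_state'] != vm_power_state:
        instance['power_state'] = vm_power_state  # would be persisted

sync_power_state({'task_state': 'spawning', 'power_state': NOSTATE}, PAUSED)
```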
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.571 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.572 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.572 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] No VIF found with MAC fa:16:3e:96:ce:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:02:36 compute-0 nova_compute[189564]: 2025-12-01 20:02:36.573 189568 INFO nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Using config drive#033[00m
Dec  1 20:02:36 compute-0 podman[255271]: 2025-12-01 20:02:36.813904768 +0000 UTC m=+0.108981172 container create 59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  1 20:02:36 compute-0 podman[255271]: 2025-12-01 20:02:36.759939569 +0000 UTC m=+0.055015983 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 20:02:36 compute-0 systemd[1]: Started libpod-conmon-59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010.scope.
Dec  1 20:02:36 compute-0 systemd[1]: Started libcrun container.
Dec  1 20:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9722a2c7ce3c81ce6b286e207364c55fc594916c29299d314fc0bb6f3313a714/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 20:02:36 compute-0 podman[255271]: 2025-12-01 20:02:36.975511846 +0000 UTC m=+0.270588290 container init 59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 20:02:36 compute-0 podman[255271]: 2025-12-01 20:02:36.983352191 +0000 UTC m=+0.278428595 container start 59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  1 20:02:37 compute-0 neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db[255284]: [NOTICE]   (255288) : New worker (255290) forked
Dec  1 20:02:37 compute-0 neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db[255284]: [NOTICE]   (255288) : Loading success.
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.044 189568 DEBUG oslo_concurrency.lockutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.046 189568 DEBUG oslo_concurrency.lockutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.047 189568 INFO nova.compute.manager [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Rebooting instance#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.072 189568 INFO nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Creating config drive at /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk.config#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.083 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg21i0zzn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.115 189568 DEBUG oslo_concurrency.lockutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.117 189568 DEBUG oslo_concurrency.lockutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquired lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.118 189568 DEBUG nova.network.neutron [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.225 189568 DEBUG oslo_concurrency.processutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg21i0zzn" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
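[editor's note] The config drive reduces to the mkisofs run logged above: pack a metadata directory into an ISO9660 image with volume label config-2, which cloud-init detects by label inside the guest. A standalone reproduction of the logged invocation, as a sketch; the paths and the shortened publisher string are placeholders, the flags are the ones nova passed.

```python
# Standalone reproduction of the mkisofs invocation logged above. The
# iso_path/metadata_dir arguments and the trimmed publisher string are
# placeholders; '-V config-2' is the label cloud-init looks for.
import subprocess

def build_config_drive(iso_path: str, metadata_dir: str) -> None:
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso_path,
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute', '-quiet',
         '-J', '-r', '-V', 'config-2',
         metadata_dir],
        check=True)

# e.g. build_config_drive('/tmp/disk.config', '/tmp/config_drive_tree')
```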
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.270 189568 DEBUG nova.network.neutron [req-b072c661-97d3-4d62-b14f-21290abf750e req-25e0fb6d-4c2f-444e-a75d-db2f5fc6d4af 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Updated VIF entry in instance network info cache for port 36c65cc8-9f73-47e0-8a82-7ca2a02890e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.273 189568 DEBUG nova.network.neutron [req-b072c661-97d3-4d62-b14f-21290abf750e req-25e0fb6d-4c2f-444e-a75d-db2f5fc6d4af 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Updating instance_info_cache with network_info: [{"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:37 compute-0 kernel: tap05dcfe74-fe: entered promiscuous mode
Dec  1 20:02:37 compute-0 NetworkManager[56474]: <info>  [1764619357.3001] manager: (tap05dcfe74-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.304 189568 DEBUG oslo_concurrency.lockutils [req-b072c661-97d3-4d62-b14f-21290abf750e req-25e0fb6d-4c2f-444e-a75d-db2f5fc6d4af 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-421c1bd5-7edf-41ce-b0a5-872efcaf35b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:37 compute-0 ovn_controller[97948]: 2025-12-01T20:02:37Z|00114|binding|INFO|Claiming lport 05dcfe74-fe60-45d4-b1df-aec9fcc57adb for this chassis.
Dec  1 20:02:37 compute-0 ovn_controller[97948]: 2025-12-01T20:02:37Z|00115|binding|INFO|05dcfe74-fe60-45d4-b1df-aec9fcc57adb: Claiming fa:16:3e:96:ce:cc 10.100.0.11
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.308 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.315 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:ce:cc 10.100.0.11'], port_security=['fa:16:3e:96:ce:cc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6c1de815-4e42-4798-9a73-220b67333524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '162c071887824085bcc9c384a2f8baf0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2076f83d-5552-45b8-8fa9-3136d8f7a584', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=814c1014-135a-4652-9979-0910a324d6ee, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=05dcfe74-fe60-45d4-b1df-aec9fcc57adb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.318 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 05dcfe74-fe60-45d4-b1df-aec9fcc57adb in datapath d273f808-5cbd-4428-9f2c-ed8b50232c12 bound to our chassis#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.322 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d273f808-5cbd-4428-9f2c-ed8b50232c12#033[00m
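[editor's note] The matched PortBindingUpdatedEvent above is an ovsdbapp row event: the metadata agent watches Port_Binding updates in the OVN southbound DB and reacts when a port lands on its own chassis, which is what triggers the metadata provisioning that follows. A sketch of such a watcher is below; the import path and constructor arguments mirror what the log prints, but the match and handler bodies are simplified assumptions, not neutron's implementation.

```python
# Sketch of a Port_Binding watcher in the style of the event matched
# above. Constructor arguments mirror the matched-event record; the
# match and handler bodies are simplified assumptions.
from ovsdbapp.backend.ovs_idl import event as row_event

OUR_CHASSIS = 'compute-0.ctlplane.example.com'  # from the log

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # events=('update',), table='Port_Binding', conditions=None,
        # exactly as printed in the matched-event record above.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
        self.event_name = 'PortBindingUpdatedEvent'

    def match_fn(self, event, row, old=None):
        # Fire when the row just gained a chassis and that chassis was
        # requested for this host (neutron checks more than this).
        return (bool(row.chassis) and not getattr(old, 'chassis', None)
                and row.options.get('requested-chassis') == OUR_CHASSIS)

    def run(self, event, row, old):
        print(f'Port {row.logical_port} bound to our chassis; '
              f'provision metadata for its datapath')

# Registration happens on the agent's southbound IDL, e.g. via
# idl.notify_handler.watch_event(PortBindingUpdatedEvent()).
```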
Dec  1 20:02:37 compute-0 NetworkManager[56474]: <info>  [1764619357.3341] device (tap05dcfe74-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:02:37 compute-0 NetworkManager[56474]: <info>  [1764619357.3349] device (tap05dcfe74-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:02:37 compute-0 ovn_controller[97948]: 2025-12-01T20:02:37Z|00116|binding|INFO|Setting lport 05dcfe74-fe60-45d4-b1df-aec9fcc57adb ovn-installed in OVS
Dec  1 20:02:37 compute-0 ovn_controller[97948]: 2025-12-01T20:02:37Z|00117|binding|INFO|Setting lport 05dcfe74-fe60-45d4-b1df-aec9fcc57adb up in Southbound
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.340 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.344 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.346 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc25c2a-134a-4fe5-baa7-b3256618eb7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.347 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd273f808-51 in ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.348 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.352 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd273f808-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.352 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4548b9-9b67-4f1c-954e-ae8f97d73bc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.355 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[995a028b-a5fd-4437-800f-335b4e634645]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.369 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[847b2229-7aa9-41cd-8d58-54c204f5b199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.389 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[020c38eb-8e0e-4cc9-92ef-17cff7a3fe13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 systemd-machined[155891]: New machine qemu-11-instance-0000000a.
Dec  1 20:02:37 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.419 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6c310a-0dc4-4607-9581-fad1ceb5f598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.426 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9533522a-fd57-4976-9a85-c39adbc7fe13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 NetworkManager[56474]: <info>  [1764619357.4278] manager: (tapd273f808-50): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Dec  1 20:02:37 compute-0 podman[255315]: 2025-12-01 20:02:37.451634131 +0000 UTC m=+0.091104615 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.460 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4b4ea7-d880-4a48-95a5-8683b3ac12ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.464 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f70150-7204-4ae1-9e3a-b8c1d65202d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 NetworkManager[56474]: <info>  [1764619357.4893] device (tapd273f808-50): carrier: link connected
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.495 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd98ae7-d424-4e7d-b479-a0aa4fe00398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.515 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[4acca885-caee-4527-a4c6-48a4fcff70c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd273f808-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:ef:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584520, 'reachable_time': 22085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255356, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.535 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfc6198-3e4c-4ed5-b975-ae8a6b353ddf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:ef68'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584520, 'tstamp': 584520}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255358, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.563 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[fba70146-2d58-4313-be6c-1609172a895c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd273f808-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:ef:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584520, 'reachable_time': 22085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255359, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.603 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac4383e-22f7-43d2-a039-b257f80ad941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.696 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[37df7830-78cf-412e-8cb9-f6cfe745adae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.698 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd273f808-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.698 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.698 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd273f808-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:37 compute-0 NetworkManager[56474]: <info>  [1764619357.7030] manager: (tapd273f808-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec  1 20:02:37 compute-0 kernel: tapd273f808-50: entered promiscuous mode
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.708 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd273f808-50, col_values=(('external_ids', {'iface-id': 'b1e4fac5-26a3-4807-b860-bcfa4669fff5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:37 compute-0 ovn_controller[97948]: 2025-12-01T20:02:37Z|00118|binding|INFO|Releasing lport b1e4fac5-26a3-4807-b860-bcfa4669fff5 from this chassis (sb_readonly=0)
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.713 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.747 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.747 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d273f808-5cbd-4428-9f2c-ed8b50232c12.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d273f808-5cbd-4428-9f2c-ed8b50232c12.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.751 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[a0759df8-ba9d-4949-8252-1ed360a27ab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.751 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: global
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-d273f808-5cbd-4428-9f2c-ed8b50232c12
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/d273f808-5cbd-4428-9f2c-ed8b50232c12.pid.haproxy
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID d273f808-5cbd-4428-9f2c-ed8b50232c12
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 20:02:37 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:37.752 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'env', 'PROCESS_TAG=haproxy-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d273f808-5cbd-4428-9f2c-ed8b50232c12.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
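[editor's note] That rootwrap command is the end of the provisioning path: haproxy is started inside the ovnmeta- namespace with the config printed above, binding 169.254.169.254:80 there and forwarding requests to the metadata agent's UNIX socket (haproxy treats a server address beginning with '/' as a UNIX socket path), with the X-OVN-Network-ID header identifying the network. Stripped of rootwrap and privsep, the launch reduces to roughly the following sketch.

```python
# What the rootwrap command above amounts to, minus rootwrap and
# privsep: run haproxy inside the ovnmeta-<network-id> namespace with
# the generated config (paths copied from the log; requires root).
import subprocess

NETNS = 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12'
CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
       'd273f808-5cbd-4428-9f2c-ed8b50232c12.conf')

# haproxy daemonizes itself (the 'daemon' directive in the config
# above), so this returns once the master has forked its worker.
subprocess.run(['ip', 'netns', 'exec', NETNS, 'haproxy', '-f', CFG],
               check=True)
```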
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.760 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.987 189568 DEBUG nova.compute.manager [req-665cb57e-2086-49b7-a67b-fc50d2a0878f req-0b225237-be0f-48a8-b856-285a326e3c4a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received event network-vif-plugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.988 189568 DEBUG oslo_concurrency.lockutils [req-665cb57e-2086-49b7-a67b-fc50d2a0878f req-0b225237-be0f-48a8-b856-285a326e3c4a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.989 189568 DEBUG oslo_concurrency.lockutils [req-665cb57e-2086-49b7-a67b-fc50d2a0878f req-0b225237-be0f-48a8-b856-285a326e3c4a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.989 189568 DEBUG oslo_concurrency.lockutils [req-665cb57e-2086-49b7-a67b-fc50d2a0878f req-0b225237-be0f-48a8-b856-285a326e3c4a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.990 189568 DEBUG nova.compute.manager [req-665cb57e-2086-49b7-a67b-fc50d2a0878f req-0b225237-be0f-48a8-b856-285a326e3c4a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Processing event network-vif-plugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.992 189568 DEBUG nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.998 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619357.9980302, 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:37 compute-0 nova_compute[189564]: 2025-12-01 20:02:37.998 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.007 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.015 189568 INFO nova.virt.libvirt.driver [-] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Instance spawned successfully.#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.016 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.022 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.043 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.050 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.051 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.052 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.053 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.053 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.054 189568 DEBUG nova.virt.libvirt.driver [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.082 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.123 189568 INFO nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Took 6.67 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.123 189568 DEBUG nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.194 189568 INFO nova.compute.manager [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Took 7.45 seconds to build instance.#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.220 189568 DEBUG oslo_concurrency.lockutils [None req-909d3cbc-f07b-42ff-b10d-448f6649e7c7 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.221 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.222 189568 INFO nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.222 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:38 compute-0 podman[255388]: 2025-12-01 20:02:38.266721872 +0000 UTC m=+0.110257431 container create 60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 20:02:38 compute-0 podman[255388]: 2025-12-01 20:02:38.20010276 +0000 UTC m=+0.043638349 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 20:02:38 compute-0 systemd[1]: Started libpod-conmon-60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410.scope.
Dec  1 20:02:38 compute-0 systemd[1]: Started libcrun container.
Dec  1 20:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25e21cfd61ff86c2cdb153566dcaac9b1e4f22b0c8f3ebb15b3a06c6c2916ce9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.400 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619358.4002712, 6c1de815-4e42-4798-9a73-220b67333524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.401 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] VM Started (Lifecycle Event)#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.518 189568 DEBUG nova.network.neutron [req-4d117f9b-37b2-4680-b4d7-3bba14b8a359 req-50903c77-232c-43b8-a5d8-30bd15f1b1bb 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updated VIF entry in instance network info cache for port 05dcfe74-fe60-45d4-b1df-aec9fcc57adb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.519 189568 DEBUG nova.network.neutron [req-4d117f9b-37b2-4680-b4d7-3bba14b8a359 req-50903c77-232c-43b8-a5d8-30bd15f1b1bb 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updating instance_info_cache with network_info: [{"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:38 compute-0 podman[255388]: 2025-12-01 20:02:38.53114269 +0000 UTC m=+0.374678279 container init 60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.537 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:38 compute-0 podman[255388]: 2025-12-01 20:02:38.542127072 +0000 UTC m=+0.385662631 container start 60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.549 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619358.4003305, 6c1de815-4e42-4798-9a73-220b67333524 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.549 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:02:38 compute-0 neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12[255407]: [NOTICE]   (255412) : New worker (255414) forked
Dec  1 20:02:38 compute-0 neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12[255407]: [NOTICE]   (255412) : Loading success.
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.567 189568 DEBUG oslo_concurrency.lockutils [req-4d117f9b-37b2-4680-b4d7-3bba14b8a359 req-50903c77-232c-43b8-a5d8-30bd15f1b1bb 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.569 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.574 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.606 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:02:38 compute-0 nova_compute[189564]: 2025-12-01 20:02:38.857 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:39 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.496 189568 DEBUG nova.network.neutron [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Updating instance_info_cache with network_info: [{"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:02:39 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.536 189568 DEBUG oslo_concurrency.lockutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Releasing lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:02:39 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.538 189568 DEBUG nova.compute.manager [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:39 compute-0 kernel: tap09097114-7a (unregistering): left promiscuous mode
Dec  1 20:02:39 compute-0 NetworkManager[56474]: <info>  [1764619359.7991] device (tap09097114-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:02:39 compute-0 ovn_controller[97948]: 2025-12-01T20:02:39Z|00119|binding|INFO|Releasing lport 09097114-7a48-4b64-ab17-ed474efbf80e from this chassis (sb_readonly=0)
Dec  1 20:02:39 compute-0 ovn_controller[97948]: 2025-12-01T20:02:39Z|00120|binding|INFO|Setting lport 09097114-7a48-4b64-ab17-ed474efbf80e down in Southbound
Dec  1 20:02:39 compute-0 ovn_controller[97948]: 2025-12-01T20:02:39Z|00121|binding|INFO|Removing iface tap09097114-7a ovn-installed in OVS
Dec  1 20:02:39 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.815 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:39 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:39.821 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:bf:1a 10.100.0.13'], port_security=['fa:16:3e:3e:bf:1a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4a104baa-5fd5-47aa-973b-11d99c76c3e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5102d72cb1ce4e6da810b2584a2abd73', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb1a9182-2a79-4a69-a063-58799cf34a33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0f29072-dc2b-4972-a602-c2fe180fbdaf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=09097114-7a48-4b64-ab17-ed474efbf80e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:02:39 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:39.823 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 09097114-7a48-4b64-ab17-ed474efbf80e in datapath 419dfb65-f0dd-44b5-a131-b7c37ebf4bab unbound from our chassis#033[00m
Dec  1 20:02:39 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:39.825 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 419dfb65-f0dd-44b5-a131-b7c37ebf4bab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 20:02:39 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:39.826 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f4698387-4946-46d2-b937-b4a17b8d6f22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:39 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:39.828 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab namespace which is not needed anymore#033[00m
Dec  1 20:02:39 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.833 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:39 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  1 20:02:39 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 41.473s CPU time.
Dec  1 20:02:39 compute-0 systemd-machined[155891]: Machine qemu-7-instance-00000007 terminated.
Dec  1 20:02:39 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.980 189568 INFO nova.virt.libvirt.driver [-] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Instance destroyed successfully.#033[00m
Dec  1 20:02:39 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.981 189568 DEBUG nova.objects.instance [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lazy-loading 'resources' on Instance uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:39 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.998 189568 DEBUG nova.virt.libvirt.vif [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1064429924',display_name='tempest-ServerActionsTestJSON-server-1064429924',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1064429924',id=7,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNy2Fa/005sFOm6rBTfWAhWPMicjwNe2lxBTmDNZ4YT4rkioptEkmqoV9BaZ0x7iRnfzTvUcepaaUfsJtdWIwpd6ISWDG/KMPFbrCHDmVc4nqNhxbzpyNrnXIODKw/JJYg==',key_name='tempest-keypair-1301911410',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:01:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5102d72cb1ce4e6da810b2584a2abd73',ramdisk_id='',reservation_id='r-3k9rdt17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-87382225',owner_user_name='tempest-ServerActionsTestJSON-87382225-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:02:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='89c8a8cb31224140bf2b9c0b94acfe04',uuid=4a104baa-5fd5-47aa-973b-11d99c76c3e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:02:39 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.999 189568 DEBUG nova.network.os_vif_util [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converting VIF {"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:39.999 189568 DEBUG nova.network.os_vif_util [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.000 189568 DEBUG os_vif [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.001 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.002 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09097114-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.003 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.007 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.009 189568 INFO os_vif [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a')#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.019 189568 DEBUG nova.virt.libvirt.driver [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Start _get_guest_xml network_info=[{"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.026 189568 WARNING nova.virt.libvirt.driver [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.033 189568 DEBUG nova.virt.libvirt.host [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.034 189568 DEBUG nova.virt.libvirt.host [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.037 189568 DEBUG nova.virt.libvirt.host [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.038 189568 DEBUG nova.virt.libvirt.host [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.038 189568 DEBUG nova.virt.libvirt.driver [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.038 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.039 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.039 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.039 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.039 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.039 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.040 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.040 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.040 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.040 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.041 189568 DEBUG nova.virt.hardware [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.041 189568 DEBUG nova.objects.instance [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:40 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[254107]: [NOTICE]   (254158) : haproxy version is 2.8.14-c23fe91
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.062 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:40 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[254107]: [NOTICE]   (254158) : path to executable is /usr/sbin/haproxy
Dec  1 20:02:40 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[254107]: [WARNING]  (254158) : Exiting Master process...
Dec  1 20:02:40 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[254107]: [ALERT]    (254158) : Current worker (254170) exited with code 143 (Terminated)
Dec  1 20:02:40 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[254107]: [WARNING]  (254158) : All workers exited. Exiting... (0)
Dec  1 20:02:40 compute-0 systemd[1]: libpod-c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c.scope: Deactivated successfully.
Dec  1 20:02:40 compute-0 podman[255457]: 2025-12-01 20:02:40.085085421 +0000 UTC m=+0.112780920 container died c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.126 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.config --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.127 189568 DEBUG oslo_concurrency.lockutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.127 189568 DEBUG oslo_concurrency.lockutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.128 189568 DEBUG oslo_concurrency.lockutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.130 189568 DEBUG nova.virt.libvirt.vif [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1064429924',display_name='tempest-ServerActionsTestJSON-server-1064429924',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1064429924',id=7,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNy2Fa/005sFOm6rBTfWAhWPMicjwNe2lxBTmDNZ4YT4rkioptEkmqoV9BaZ0x7iRnfzTvUcepaaUfsJtdWIwpd6ISWDG/KMPFbrCHDmVc4nqNhxbzpyNrnXIODKw/JJYg==',key_name='tempest-keypair-1301911410',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:01:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5102d72cb1ce4e6da810b2584a2abd73',ramdisk_id='',reservation_id='r-3k9rdt17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-87382225',owner_user_name='tempest-ServerActionsTestJSON-87382225-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:02:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='89c8a8cb31224140bf2b9c0b94acfe04',uuid=4a104baa-5fd5-47aa-973b-11d99c76c3e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.131 189568 DEBUG nova.network.os_vif_util [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converting VIF {"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.131 189568 DEBUG nova.network.os_vif_util [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.133 189568 DEBUG nova.objects.instance [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c-userdata-shm.mount: Deactivated successfully.
Dec  1 20:02:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d139d7d6fc9e60e18b5717679f82de0ce940f2f4fa2594cdcdf9444ca8cd222-merged.mount: Deactivated successfully.
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.155 189568 DEBUG nova.virt.libvirt.driver [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <uuid>4a104baa-5fd5-47aa-973b-11d99c76c3e2</uuid>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <name>instance-00000007</name>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <nova:name>tempest-ServerActionsTestJSON-server-1064429924</nova:name>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:02:40</nova:creationTime>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:02:40 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:02:40 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:02:40 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:02:40 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:02:40 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:02:40 compute-0 nova_compute[189564]:        <nova:user uuid="89c8a8cb31224140bf2b9c0b94acfe04">tempest-ServerActionsTestJSON-87382225-project-member</nova:user>
Dec  1 20:02:40 compute-0 nova_compute[189564]:        <nova:project uuid="5102d72cb1ce4e6da810b2584a2abd73">tempest-ServerActionsTestJSON-87382225</nova:project>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="d169c234-7ac2-4fdc-b9fa-a08c93484d75"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:02:40 compute-0 nova_compute[189564]:        <nova:port uuid="09097114-7a48-4b64-ab17-ed474efbf80e">
Dec  1 20:02:40 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <system>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <entry name="serial">4a104baa-5fd5-47aa-973b-11d99c76c3e2</entry>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <entry name="uuid">4a104baa-5fd5-47aa-973b-11d99c76c3e2</entry>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    </system>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <os>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  </os>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <features>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  </features>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk.config"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:3e:bf:1a"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <target dev="tap09097114-7a"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/console.log" append="off"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <video>
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    </video>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <input type="keyboard" bus="usb"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:02:40 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:02:40 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:02:40 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:02:40 compute-0 nova_compute[189564]: </domain>
Dec  1 20:02:40 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.156 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:40 compute-0 podman[255457]: 2025-12-01 20:02:40.166207856 +0000 UTC m=+0.193903375 container cleanup c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:02:40 compute-0 systemd[1]: libpod-conmon-c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c.scope: Deactivated successfully.
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.234 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.235 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
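
Both qemu-img probes above run under oslo.concurrency's prlimit wrapper: the child process gets a 1 GiB address-space cap (--as=1073741824) and a 30 s CPU-time cap (--cpu=30) so a malformed qcow2 cannot wedge the compute service, and --force-share avoids taking the image lock while the guest still owns it. A minimal sketch of the same guarded probe, assuming qemu-img is on PATH:

    # Sketch of the guarded disk probe logged above.
    import json
    import subprocess

    disk = "/var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # 1 GiB address-space cap for the child
        "--cpu=30",          # 30 s CPU-time cap
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])
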
Dec  1 20:02:40 compute-0 podman[255493]: 2025-12-01 20:02:40.26826101 +0000 UTC m=+0.070734281 container remove c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.277 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[91a709e4-9c1f-4fc1-a0f7-da18f7463afc]: (4, ('Mon Dec  1 08:02:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab (c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c)\nc0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c\nMon Dec  1 08:02:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab (c0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c)\nc0ec349cd527aaa2050cd456a2adde135cadbf6873f2e9819fe20dd3647d976c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.279 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b4df3b60-d3e8-47e5-873e-dd53f2326a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.280 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap419dfb65-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.283 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 kernel: tap419dfb65-f0: left promiscuous mode
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.287 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.304 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[03e1ad2f-7960-4916-8406-3b6e60123e3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.310 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.317 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.319 189568 DEBUG nova.objects.instance [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.322 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[3d028497-8d0a-4ea5-bb76-09647d2c2e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.325 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a69a49-62e5-42d4-b3a2-c1f904f4042c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.340 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.350 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[db40be50-edc8-4ed8-94fd-9f950c6f68a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576646, 'reachable_time': 24130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255511, 'error': None, 'target': 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
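
Replies like the record above are raw pyroute2 netlink messages (RTM_NEWLINK dumps) relayed back through the privsep daemon while the agent inspects the old ovnmeta- namespace just before deleting it on the next line. A sketch of the same query, assuming pyroute2 is installed and the namespace still exists:

    # Sketch: dump link state inside a named network namespace.
    from pyroute2 import NetNS

    ns_name = "ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab"
    with NetNS(ns_name) as ns:
        for link in ns.get_links():    # RTM_NEWLINK messages, as logged
            print(link.get_attr("IFLA_IFNAME"), link.get("state"))
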
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.352 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.352 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2f0993-b798-40be-b5c1-0bdb64c69e2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d419dfb65\x2df0dd\x2d44b5\x2da131\x2db7c37ebf4bab.mount: Deactivated successfully.
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.410 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.417 189568 DEBUG nova.virt.disk.api [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Checking if we can resize image /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.417 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.475 189568 DEBUG oslo_concurrency.processutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.476 189568 DEBUG nova.virt.disk.api [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Cannot resize image /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
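
The resize check above is a simple guard: nova.virt.disk.api.can_resize_image compares the flavor's root disk (size=1073741824, i.e. 1 GiB) against the virtual size qemu-img just reported, and refuses to shrink, which is why the disk is left untouched here. Roughly:

    # Rough shape of the guard in nova.virt.disk.api.can_resize_image;
    # `virt_size` stands in for the "virtual-size" value probed above.
    virt_size = 1073741824          # placeholder; take it from the probe
    requested = 1 * 1024 ** 3       # flavor root_gb=1 from the instance dump

    can_resize = not (virt_size > requested)
    if not can_resize:
        # Matches the "Cannot resize image ... to a smaller size." line.
        print("refusing to shrink the root disk")
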
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.476 189568 DEBUG nova.objects.instance [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lazy-loading 'migration_context' on Instance uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.505 189568 DEBUG nova.virt.libvirt.vif [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1064429924',display_name='tempest-ServerActionsTestJSON-server-1064429924',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1064429924',id=7,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNy2Fa/005sFOm6rBTfWAhWPMicjwNe2lxBTmDNZ4YT4rkioptEkmqoV9BaZ0x7iRnfzTvUcepaaUfsJtdWIwpd6ISWDG/KMPFbrCHDmVc4nqNhxbzpyNrnXIODKw/JJYg==',key_name='tempest-keypair-1301911410',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:01:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='5102d72cb1ce4e6da810b2584a2abd73',ramdisk_id='',reservation_id='r-3k9rdt17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-87382225',owner_user_name='tempest-ServerActionsTestJSON-87382225-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:02:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='89c8a8cb31224140bf2b9c0b94acfe04',uuid=4a104baa-5fd5-47aa-973b-11d99c76c3e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.505 189568 DEBUG nova.network.os_vif_util [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converting VIF {"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.506 189568 DEBUG nova.network.os_vif_util [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.506 189568 DEBUG os_vif [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.507 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.507 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.508 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.511 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.511 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09097114-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.512 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09097114-7a, col_values=(('external_ids', {'iface-id': '09097114-7a48-4b64-ab17-ed474efbf80e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:bf:1a', 'vm-uuid': '4a104baa-5fd5-47aa-973b-11d99c76c3e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
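
Plugging the VIF reduces to the two OVSDB operations above: an idempotent add-port on br-int plus setting external_ids on the Interface row. The iface-id value is the Neutron port UUID, which is what ovn-controller matches to claim the logical port a few lines below. Nova drives this through ovsdbapp's native IDL transaction; the command-line equivalent would look roughly like this sketch:

    # Command-line equivalent (sketch) of the AddPortCommand/DbSetCommand
    # transaction above; Nova itself uses the ovsdbapp IDL, not ovs-vsctl.
    import subprocess

    bridge, port = "br-int", "tap09097114-7a"
    ids = {
        "iface-id": "09097114-7a48-4b64-ab17-ed474efbf80e",  # Neutron port UUID
        "iface-status": "active",
        "attached-mac": "fa:16:3e:3e:bf:1a",
        "vm-uuid": "4a104baa-5fd5-47aa-973b-11d99c76c3e2",
    }
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in ids.items()]
    subprocess.run(cmd, check=True)   # ovn-controller then claims the lport
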
Dec  1 20:02:40 compute-0 NetworkManager[56474]: <info>  [1764619360.5142] manager: (tap09097114-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.516 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.523 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.524 189568 INFO os_vif [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a')#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.581 189568 DEBUG nova.compute.manager [req-f178673e-ab04-4c8c-97f6-a310f0eec29c req-24784580-1568-41b2-ba55-46ef92c41f93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-unplugged-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.582 189568 DEBUG oslo_concurrency.lockutils [req-f178673e-ab04-4c8c-97f6-a310f0eec29c req-24784580-1568-41b2-ba55-46ef92c41f93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.582 189568 DEBUG oslo_concurrency.lockutils [req-f178673e-ab04-4c8c-97f6-a310f0eec29c req-24784580-1568-41b2-ba55-46ef92c41f93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.591 189568 DEBUG oslo_concurrency.lockutils [req-f178673e-ab04-4c8c-97f6-a310f0eec29c req-24784580-1568-41b2-ba55-46ef92c41f93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.592 189568 DEBUG nova.compute.manager [req-f178673e-ab04-4c8c-97f6-a310f0eec29c req-24784580-1568-41b2-ba55-46ef92c41f93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] No waiting events found dispatching network-vif-unplugged-09097114-7a48-4b64-ab17-ed474efbf80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.592 189568 WARNING nova.compute.manager [req-f178673e-ab04-4c8c-97f6-a310f0eec29c req-24784580-1568-41b2-ba55-46ef92c41f93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received unexpected event network-vif-unplugged-09097114-7a48-4b64-ab17-ed474efbf80e for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  1 20:02:40 compute-0 kernel: tap09097114-7a: entered promiscuous mode
Dec  1 20:02:40 compute-0 NetworkManager[56474]: <info>  [1764619360.6140] manager: (tap09097114-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Dec  1 20:02:40 compute-0 ovn_controller[97948]: 2025-12-01T20:02:40Z|00122|binding|INFO|Claiming lport 09097114-7a48-4b64-ab17-ed474efbf80e for this chassis.
Dec  1 20:02:40 compute-0 ovn_controller[97948]: 2025-12-01T20:02:40Z|00123|binding|INFO|09097114-7a48-4b64-ab17-ed474efbf80e: Claiming fa:16:3e:3e:bf:1a 10.100.0.13
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.617 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 systemd-udevd[255428]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.642 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:bf:1a 10.100.0.13'], port_security=['fa:16:3e:3e:bf:1a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4a104baa-5fd5-47aa-973b-11d99c76c3e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5102d72cb1ce4e6da810b2584a2abd73', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fb1a9182-2a79-4a69-a063-58799cf34a33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.211'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0f29072-dc2b-4972-a602-c2fe180fbdaf, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=09097114-7a48-4b64-ab17-ed474efbf80e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
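
The PortBindingUpdatedEvent match above comes from an ovsdbapp row-event watcher on the Southbound Port_Binding table: old=Port_Binding(chassis=[]) against a populated new chassis column means the port was just bound here, which triggers the "bound to our chassis" and metadata-provisioning lines that follow. A minimal sketch of such a watcher (class and handler names are hypothetical, and registration against an IDL connection is omitted):

    # Minimal ovsdbapp row-event sketch of the kind matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self):
            # Watch updates to the Southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # `old` carries only the previous values of changed columns;
            # an empty old chassis plus a populated new one means the
            # port was just bound to some chassis.
            if getattr(old, "chassis", None) == [] and row.chassis:
                print(f"lport {row.logical_port} bound; provision metadata")
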
Dec  1 20:02:40 compute-0 NetworkManager[56474]: <info>  [1764619360.6441] device (tap09097114-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.644 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 09097114-7a48-4b64-ab17-ed474efbf80e in datapath 419dfb65-f0dd-44b5-a131-b7c37ebf4bab bound to our chassis#033[00m
Dec  1 20:02:40 compute-0 NetworkManager[56474]: <info>  [1764619360.6452] device (tap09097114-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.646 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 419dfb65-f0dd-44b5-a131-b7c37ebf4bab#033[00m
Dec  1 20:02:40 compute-0 ovn_controller[97948]: 2025-12-01T20:02:40Z|00124|binding|INFO|Setting lport 09097114-7a48-4b64-ab17-ed474efbf80e ovn-installed in OVS
Dec  1 20:02:40 compute-0 ovn_controller[97948]: 2025-12-01T20:02:40Z|00125|binding|INFO|Setting lport 09097114-7a48-4b64-ab17-ed474efbf80e up in Southbound
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.659 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.663 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a02940-9b6b-4bad-ba5c-d3a9701dfd79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.664 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap419dfb65-f1 in ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.666 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap419dfb65-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.667 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b503b5bd-8ee2-4612-89f6-219824a56f3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.668 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[794823bc-986f-4407-8dab-4c232ba9dac8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.679 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[c85ca87d-dbf7-4edf-9524-30d9f88d1ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
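
Provisioning metadata means rebuilding the plumbing torn down a moment ago: a veth pair with tap419dfb65-f1 inside the ovnmeta- namespace and tap419dfb65-f0 left in the root namespace to be plugged into br-int. Neutron does this through its privsep'd ip_lib helpers; a rough pyroute2 equivalent, assuming the namespace does not already exist:

    # Rough pyroute2 sketch of the veth provisioning logged above.
    from pyroute2 import IPRoute, NetNS, netns

    ns_name = "ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab"
    netns.create(ns_name)             # the agent recreated it at this point

    ipr = IPRoute()
    ipr.link("add", ifname="tap419dfb65-f0", kind="veth",
             peer="tap419dfb65-f1")
    peer = ipr.link_lookup(ifname="tap419dfb65-f1")[0]
    ipr.link("set", index=peer, net_ns_fd=ns_name)  # move one end inside

    with NetNS(ns_name) as ns:
        idx = ns.link_lookup(ifname="tap419dfb65-f1")[0]
        ns.link("set", index=idx, state="up")
    ipr.close()
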
Dec  1 20:02:40 compute-0 systemd-machined[155891]: New machine qemu-12-instance-00000007.
Dec  1 20:02:40 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000007.
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.721 189568 DEBUG nova.compute.manager [req-49303465-21ff-4825-a7c8-3fb604920d22 req-76d4a41a-4761-4520-862e-fef273d554d2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received event network-vif-plugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.722 189568 DEBUG oslo_concurrency.lockutils [req-49303465-21ff-4825-a7c8-3fb604920d22 req-76d4a41a-4761-4520-862e-fef273d554d2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.723 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a46397-0823-4458-be61-b62b289c946f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.725 189568 DEBUG oslo_concurrency.lockutils [req-49303465-21ff-4825-a7c8-3fb604920d22 req-76d4a41a-4761-4520-862e-fef273d554d2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.726 189568 DEBUG oslo_concurrency.lockutils [req-49303465-21ff-4825-a7c8-3fb604920d22 req-76d4a41a-4761-4520-862e-fef273d554d2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.726 189568 DEBUG nova.compute.manager [req-49303465-21ff-4825-a7c8-3fb604920d22 req-76d4a41a-4761-4520-862e-fef273d554d2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] No waiting events found dispatching network-vif-plugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:02:40 compute-0 nova_compute[189564]: 2025-12-01 20:02:40.727 189568 WARNING nova.compute.manager [req-49303465-21ff-4825-a7c8-3fb604920d22 req-76d4a41a-4761-4520-862e-fef273d554d2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received unexpected event network-vif-plugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 for instance with vm_state active and task_state None.#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.763 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad05a7d-b0dd-4fa7-b720-012f2748c81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 NetworkManager[56474]: <info>  [1764619360.7834] manager: (tap419dfb65-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.784 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c810d1c5-483c-453d-9ba9-8aec65abbbdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.814 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb1c925-a25f-4961-bc8c-fc9d16a7cfc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.817 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9aa04a-d520-4e78-b825-c16db0486f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 NetworkManager[56474]: <info>  [1764619360.8383] device (tap419dfb65-f0): carrier: link connected
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.845 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[55d9ac59-26d6-450d-8a76-18264bb4f265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.860 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[e00fcedf-bdca-4e06-9fd8-1f371a022a9f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap419dfb65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:9b:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584855, 'reachable_time': 37607, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255566, 'error': None, 'target': 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.884 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[10614260-b1d5-4b24-933f-db45bc8d5489]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:9b3e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584855, 'tstamp': 584855}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255567, 'error': None, 'target': 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.903 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7b026c-be46-47eb-aa09-d3671945070b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap419dfb65-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:9b:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584855, 'reachable_time': 37607, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255568, 'error': None, 'target': 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:40 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:40.947 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[38df4eef-414a-4fb7-a634-b8a41613de0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:41.009 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[133f9f7b-9db8-415a-86af-f747658bd61d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:41.010 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap419dfb65-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:41.010 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:41.011 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap419dfb65-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.012 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:41 compute-0 NetworkManager[56474]: <info>  [1764619361.0132] manager: (tap419dfb65-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec  1 20:02:41 compute-0 kernel: tap419dfb65-f0: entered promiscuous mode
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:41.017 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap419dfb65-f0, col_values=(('external_ids', {'iface-id': '0966f8f1-95fd-4a77-80c1-25197c60ec2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:02:41 compute-0 ovn_controller[97948]: 2025-12-01T20:02:41Z|00126|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.018 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.032 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:41.033 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/419dfb65-f0dd-44b5-a131-b7c37ebf4bab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/419dfb65-f0dd-44b5-a131-b7c37ebf4bab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:41.034 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d76f17d2-4e68-40cf-9dd8-ff3856a80c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:41.034 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: global
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-419dfb65-f0dd-44b5-a131-b7c37ebf4bab
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/419dfb65-f0dd-44b5-a131-b7c37ebf4bab.pid.haproxy
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID 419dfb65-f0dd-44b5-a131-b7c37ebf4bab
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 20:02:41 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:41.035 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'env', 'PROCESS_TAG=haproxy-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/419dfb65-f0dd-44b5-a131-b7c37ebf4bab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
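
The generated haproxy.cfg above binds 169.254.169.254:80 inside the namespace and forwards to the agent's UNIX socket (a server address beginning with / is treated as a UNIX socket by haproxy), adding the X-OVN-Network-ID header so the agent can tell networks apart. The agent then launches haproxy inside the namespace through rootwrap; stripped of the rootwrap layer, the launch is roughly:

    # The rootwrap-wrapped launch above, reduced to a plain sketch.
    import subprocess

    net_id = "419dfb65-f0dd-44b5-a131-b7c37ebf4bab"
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{net_id}",
         "env", f"PROCESS_TAG=haproxy-{net_id}",
         "haproxy", "-f",
         f"/var/lib/neutron/ovn-metadata-proxy/{net_id}.conf"],
        check=True,
    )
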
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.037 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.407 189568 DEBUG nova.virt.libvirt.host [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Removed pending event for 4a104baa-5fd5-47aa-973b-11d99c76c3e2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.407 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619361.4054372, 4a104baa-5fd5-47aa-973b-11d99c76c3e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.408 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.417 189568 DEBUG nova.compute.manager [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.421 189568 INFO nova.virt.libvirt.driver [-] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Instance rebooted successfully.#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.422 189568 DEBUG nova.compute.manager [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.425 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.432 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.473 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.473 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619361.4067314, 4a104baa-5fd5-47aa-973b-11d99c76c3e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.474 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] VM Started (Lifecycle Event)#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.483 189568 DEBUG oslo_concurrency.lockutils [None req-841ca05d-97dc-4ba4-a7eb-6937d7c1c9bc 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.495 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:02:41 compute-0 nova_compute[189564]: 2025-12-01 20:02:41.503 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:02:41 compute-0 podman[255603]: 2025-12-01 20:02:41.50429196 +0000 UTC m=+0.066858231 container create 590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  1 20:02:41 compute-0 systemd[1]: Started libpod-conmon-590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934.scope.
Dec  1 20:02:41 compute-0 podman[255603]: 2025-12-01 20:02:41.475340469 +0000 UTC m=+0.037906770 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 20:02:41 compute-0 systemd[1]: Started libcrun container.
Dec  1 20:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcaedc1cb9702662f57327f8efb7de9ad6e4d6aaf864bcd5394c1b5ada553131/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 20:02:41 compute-0 podman[255603]: 2025-12-01 20:02:41.649324163 +0000 UTC m=+0.211890464 container init 590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  1 20:02:41 compute-0 podman[255603]: 2025-12-01 20:02:41.659767027 +0000 UTC m=+0.222333308 container start 590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 20:02:41 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[255618]: [NOTICE]   (255622) : New worker (255624) forked
Dec  1 20:02:41 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[255618]: [NOTICE]   (255622) : Loading success.
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.749 189568 DEBUG nova.compute.manager [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.750 189568 DEBUG oslo_concurrency.lockutils [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.750 189568 DEBUG oslo_concurrency.lockutils [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.750 189568 DEBUG oslo_concurrency.lockutils [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.750 189568 DEBUG nova.compute.manager [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] No waiting events found dispatching network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.750 189568 WARNING nova.compute.manager [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received unexpected event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e for instance with vm_state active and task_state None.
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.751 189568 DEBUG nova.compute.manager [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.751 189568 DEBUG oslo_concurrency.lockutils [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.751 189568 DEBUG oslo_concurrency.lockutils [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.751 189568 DEBUG oslo_concurrency.lockutils [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.752 189568 DEBUG nova.compute.manager [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] No waiting events found dispatching network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.752 189568 WARNING nova.compute.manager [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received unexpected event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e for instance with vm_state active and task_state None.
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.752 189568 DEBUG nova.compute.manager [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.752 189568 DEBUG oslo_concurrency.lockutils [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.753 189568 DEBUG oslo_concurrency.lockutils [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.753 189568 DEBUG oslo_concurrency.lockutils [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.753 189568 DEBUG nova.compute.manager [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] No waiting events found dispatching network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 20:02:42 compute-0 nova_compute[189564]: 2025-12-01 20:02:42.753 189568 WARNING nova.compute.manager [req-129318d9-17ca-41ba-a216-c89465338d38 req-49cc407f-bd9b-4237-a835-80281d3ca75a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received unexpected event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e for instance with vm_state active and task_state None.
Dec  1 20:02:43 compute-0 nova_compute[189564]: 2025-12-01 20:02:43.860 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.731 189568 DEBUG nova.compute.manager [req-ebc127d0-c432-41bf-9c28-3c4649ee2581 req-5e1832e2-ca83-4dda-b92c-c9147b564cc4 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received event network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.731 189568 DEBUG oslo_concurrency.lockutils [req-ebc127d0-c432-41bf-9c28-3c4649ee2581 req-5e1832e2-ca83-4dda-b92c-c9147b564cc4 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "6c1de815-4e42-4798-9a73-220b67333524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.732 189568 DEBUG oslo_concurrency.lockutils [req-ebc127d0-c432-41bf-9c28-3c4649ee2581 req-5e1832e2-ca83-4dda-b92c-c9147b564cc4 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.732 189568 DEBUG oslo_concurrency.lockutils [req-ebc127d0-c432-41bf-9c28-3c4649ee2581 req-5e1832e2-ca83-4dda-b92c-c9147b564cc4 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.732 189568 DEBUG nova.compute.manager [req-ebc127d0-c432-41bf-9c28-3c4649ee2581 req-5e1832e2-ca83-4dda-b92c-c9147b564cc4 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Processing event network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.734 189568 DEBUG nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.740 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619364.7391694, 6c1de815-4e42-4798-9a73-220b67333524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.740 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] VM Resumed (Lifecycle Event)
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.742 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.748 189568 INFO nova.virt.libvirt.driver [-] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Instance spawned successfully.
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.749 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.770 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.775 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.785 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.785 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.786 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.786 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.787 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.787 189568 DEBUG nova.virt.libvirt.driver [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  1 20:02:44 compute-0 podman[255646]: 2025-12-01 20:02:44.800499121 +0000 UTC m=+0.111398707 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.814 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.853 189568 INFO nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Took 14.69 seconds to spawn the instance on the hypervisor.
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.854 189568 DEBUG nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.885 189568 DEBUG nova.compute.manager [req-c4a3924b-4e22-4163-9fdf-8967f9d50cfb req-cb9df985-daa7-42c2-9c7c-a41539561243 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received event network-changed-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.886 189568 DEBUG nova.compute.manager [req-c4a3924b-4e22-4163-9fdf-8967f9d50cfb req-cb9df985-daa7-42c2-9c7c-a41539561243 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Refreshing instance network info cache due to event network-changed-36c65cc8-9f73-47e0-8a82-7ca2a02890e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.887 189568 DEBUG oslo_concurrency.lockutils [req-c4a3924b-4e22-4163-9fdf-8967f9d50cfb req-cb9df985-daa7-42c2-9c7c-a41539561243 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-421c1bd5-7edf-41ce-b0a5-872efcaf35b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.887 189568 DEBUG oslo_concurrency.lockutils [req-c4a3924b-4e22-4163-9fdf-8967f9d50cfb req-cb9df985-daa7-42c2-9c7c-a41539561243 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-421c1bd5-7edf-41ce-b0a5-872efcaf35b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.888 189568 DEBUG nova.network.neutron [req-c4a3924b-4e22-4163-9fdf-8967f9d50cfb req-cb9df985-daa7-42c2-9c7c-a41539561243 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Refreshing network info cache for port 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.965 189568 INFO nova.compute.manager [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Took 15.71 seconds to build instance.
Dec  1 20:02:44 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.997 189568 DEBUG oslo_concurrency.lockutils [None req-edc753ca-df70-41b8-9e8c-a36b0a4da18d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:45 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.998 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "6c1de815-4e42-4798-9a73-220b67333524" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 8.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:45 compute-0 nova_compute[189564]: 2025-12-01 20:02:44.999 189568 INFO nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  1 20:02:45 compute-0 nova_compute[189564]: 2025-12-01 20:02:45.000 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "6c1de815-4e42-4798-9a73-220b67333524" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:45 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:45.358 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 20:02:45 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:45.359 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 20:02:45 compute-0 nova_compute[189564]: 2025-12-01 20:02:45.361 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:45 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:02:45.364 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 20:02:45 compute-0 nova_compute[189564]: 2025-12-01 20:02:45.513 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:45 compute-0 ovn_controller[97948]: 2025-12-01T20:02:45Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:55:e7 10.100.0.6
Dec  1 20:02:45 compute-0 ovn_controller[97948]: 2025-12-01T20:02:45Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:55:e7 10.100.0.6
Dec  1 20:02:46 compute-0 nova_compute[189564]: 2025-12-01 20:02:46.327 189568 DEBUG nova.network.neutron [req-c4a3924b-4e22-4163-9fdf-8967f9d50cfb req-cb9df985-daa7-42c2-9c7c-a41539561243 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Updated VIF entry in instance network info cache for port 36c65cc8-9f73-47e0-8a82-7ca2a02890e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 20:02:46 compute-0 nova_compute[189564]: 2025-12-01 20:02:46.328 189568 DEBUG nova.network.neutron [req-c4a3924b-4e22-4163-9fdf-8967f9d50cfb req-cb9df985-daa7-42c2-9c7c-a41539561243 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Updating instance_info_cache with network_info: [{"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:02:46 compute-0 nova_compute[189564]: 2025-12-01 20:02:46.351 189568 DEBUG oslo_concurrency.lockutils [req-c4a3924b-4e22-4163-9fdf-8967f9d50cfb req-cb9df985-daa7-42c2-9c7c-a41539561243 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-421c1bd5-7edf-41ce-b0a5-872efcaf35b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 20:02:46 compute-0 nova_compute[189564]: 2025-12-01 20:02:46.847 189568 DEBUG nova.compute.manager [req-fac370c7-1202-4dc2-abf2-8ae90cbc0fbe req-1e73bdb9-437c-4cbd-b6d3-188bd9be27e1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received event network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:02:46 compute-0 nova_compute[189564]: 2025-12-01 20:02:46.847 189568 DEBUG oslo_concurrency.lockutils [req-fac370c7-1202-4dc2-abf2-8ae90cbc0fbe req-1e73bdb9-437c-4cbd-b6d3-188bd9be27e1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "6c1de815-4e42-4798-9a73-220b67333524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:02:46 compute-0 nova_compute[189564]: 2025-12-01 20:02:46.848 189568 DEBUG oslo_concurrency.lockutils [req-fac370c7-1202-4dc2-abf2-8ae90cbc0fbe req-1e73bdb9-437c-4cbd-b6d3-188bd9be27e1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:02:46 compute-0 nova_compute[189564]: 2025-12-01 20:02:46.848 189568 DEBUG oslo_concurrency.lockutils [req-fac370c7-1202-4dc2-abf2-8ae90cbc0fbe req-1e73bdb9-437c-4cbd-b6d3-188bd9be27e1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:02:46 compute-0 nova_compute[189564]: 2025-12-01 20:02:46.848 189568 DEBUG nova.compute.manager [req-fac370c7-1202-4dc2-abf2-8ae90cbc0fbe req-1e73bdb9-437c-4cbd-b6d3-188bd9be27e1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] No waiting events found dispatching network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 20:02:46 compute-0 nova_compute[189564]: 2025-12-01 20:02:46.848 189568 WARNING nova.compute.manager [req-fac370c7-1202-4dc2-abf2-8ae90cbc0fbe req-1e73bdb9-437c-4cbd-b6d3-188bd9be27e1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received unexpected event network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb for instance with vm_state active and task_state None.
Dec  1 20:02:48 compute-0 podman[255664]: 2025-12-01 20:02:48.350317764 +0000 UTC m=+0.110552400 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 20:02:48 compute-0 nova_compute[189564]: 2025-12-01 20:02:48.864 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:49 compute-0 nova_compute[189564]: 2025-12-01 20:02:49.464 189568 DEBUG nova.compute.manager [req-1fc328e7-1f36-4f30-8855-1c62295db782 req-dba44d64-718d-4790-ace7-e8ed47c0bc93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received event network-changed-05dcfe74-fe60-45d4-b1df-aec9fcc57adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:02:49 compute-0 nova_compute[189564]: 2025-12-01 20:02:49.464 189568 DEBUG nova.compute.manager [req-1fc328e7-1f36-4f30-8855-1c62295db782 req-dba44d64-718d-4790-ace7-e8ed47c0bc93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Refreshing instance network info cache due to event network-changed-05dcfe74-fe60-45d4-b1df-aec9fcc57adb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 20:02:49 compute-0 nova_compute[189564]: 2025-12-01 20:02:49.465 189568 DEBUG oslo_concurrency.lockutils [req-1fc328e7-1f36-4f30-8855-1c62295db782 req-dba44d64-718d-4790-ace7-e8ed47c0bc93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:02:49 compute-0 nova_compute[189564]: 2025-12-01 20:02:49.465 189568 DEBUG oslo_concurrency.lockutils [req-1fc328e7-1f36-4f30-8855-1c62295db782 req-dba44d64-718d-4790-ace7-e8ed47c0bc93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:02:49 compute-0 nova_compute[189564]: 2025-12-01 20:02:49.466 189568 DEBUG nova.network.neutron [req-1fc328e7-1f36-4f30-8855-1c62295db782 req-dba44d64-718d-4790-ace7-e8ed47c0bc93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Refreshing network info cache for port 05dcfe74-fe60-45d4-b1df-aec9fcc57adb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 20:02:50 compute-0 nova_compute[189564]: 2025-12-01 20:02:50.515 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:52 compute-0 nova_compute[189564]: 2025-12-01 20:02:52.019 189568 DEBUG nova.network.neutron [req-1fc328e7-1f36-4f30-8855-1c62295db782 req-dba44d64-718d-4790-ace7-e8ed47c0bc93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updated VIF entry in instance network info cache for port 05dcfe74-fe60-45d4-b1df-aec9fcc57adb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 20:02:52 compute-0 nova_compute[189564]: 2025-12-01 20:02:52.022 189568 DEBUG nova.network.neutron [req-1fc328e7-1f36-4f30-8855-1c62295db782 req-dba44d64-718d-4790-ace7-e8ed47c0bc93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updating instance_info_cache with network_info: [{"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:02:52 compute-0 nova_compute[189564]: 2025-12-01 20:02:52.049 189568 DEBUG oslo_concurrency.lockutils [req-1fc328e7-1f36-4f30-8855-1c62295db782 req-dba44d64-718d-4790-ace7-e8ed47c0bc93 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 20:02:52 compute-0 podman[255695]: 2025-12-01 20:02:52.366516769 +0000 UTC m=+0.121082859 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 20:02:52 compute-0 podman[255690]: 2025-12-01 20:02:52.373095143 +0000 UTC m=+0.142957328 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, version=9.4, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, name=ubi9)
Dec  1 20:02:52 compute-0 podman[255691]: 2025-12-01 20:02:52.376597162 +0000 UTC m=+0.134282109 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  1 20:02:52 compute-0 podman[255699]: 2025-12-01 20:02:52.389882687 +0000 UTC m=+0.132427752 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec  1 20:02:52 compute-0 podman[255709]: 2025-12-01 20:02:52.442791283 +0000 UTC m=+0.175371108 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:02:53 compute-0 nova_compute[189564]: 2025-12-01 20:02:53.865 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:54 compute-0 nova_compute[189564]: 2025-12-01 20:02:54.899 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:55 compute-0 nova_compute[189564]: 2025-12-01 20:02:55.519 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:58 compute-0 nova_compute[189564]: 2025-12-01 20:02:58.790 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:58 compute-0 nova_compute[189564]: 2025-12-01 20:02:58.870 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:02:59 compute-0 podman[203750]: time="2025-12-01T20:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:02:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33218 "" "Go-http-client/1.1"
Dec  1 20:02:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6193 "" "Go-http-client/1.1"
Dec  1 20:03:00 compute-0 nova_compute[189564]: 2025-12-01 20:03:00.522 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:01 compute-0 openstack_network_exporter[205914]: ERROR   20:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:03:01 compute-0 openstack_network_exporter[205914]: ERROR   20:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:03:01 compute-0 openstack_network_exporter[205914]: ERROR   20:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:03:01 compute-0 openstack_network_exporter[205914]: ERROR   20:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:03:01 compute-0 openstack_network_exporter[205914]: ERROR   20:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:03:03 compute-0 nova_compute[189564]: 2025-12-01 20:03:03.005 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:03 compute-0 nova_compute[189564]: 2025-12-01 20:03:03.873 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:04 compute-0 podman[255787]: 2025-12-01 20:03:04.317051341 +0000 UTC m=+0.078404431 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6)
Dec  1 20:03:05 compute-0 nova_compute[189564]: 2025-12-01 20:03:05.524 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:07 compute-0 nova_compute[189564]: 2025-12-01 20:03:07.092 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:08 compute-0 podman[255811]: 2025-12-01 20:03:08.322643725 +0000 UTC m=+0.093032015 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 20:03:08 compute-0 nova_compute[189564]: 2025-12-01 20:03:08.876 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:10 compute-0 nova_compute[189564]: 2025-12-01 20:03:10.526 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:12.221 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:03:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:12.221 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:03:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:12.222 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:03:12 compute-0 ovn_controller[97948]: 2025-12-01T20:03:12Z|00127|binding|INFO|Releasing lport 39b24bc2-6265-4d8f-9166-2751c476b101 from this chassis (sb_readonly=0)
Dec  1 20:03:12 compute-0 ovn_controller[97948]: 2025-12-01T20:03:12Z|00128|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:03:12 compute-0 ovn_controller[97948]: 2025-12-01T20:03:12Z|00129|binding|INFO|Releasing lport cb6caae9-9b40-4384-a692-7fed62ba0bdc from this chassis (sb_readonly=0)
Dec  1 20:03:12 compute-0 ovn_controller[97948]: 2025-12-01T20:03:12Z|00130|binding|INFO|Releasing lport b1e4fac5-26a3-4807-b860-bcfa4669fff5 from this chassis (sb_readonly=0)
Dec  1 20:03:12 compute-0 ovn_controller[97948]: 2025-12-01T20:03:12Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:e4:f2 10.100.0.14
Dec  1 20:03:12 compute-0 ovn_controller[97948]: 2025-12-01T20:03:12Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:e4:f2 10.100.0.14
Dec  1 20:03:12 compute-0 nova_compute[189564]: 2025-12-01 20:03:12.598 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:13 compute-0 nova_compute[189564]: 2025-12-01 20:03:13.879 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:14 compute-0 ovn_controller[97948]: 2025-12-01T20:03:14Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:bf:1a 10.100.0.13
Dec  1 20:03:14 compute-0 nova_compute[189564]: 2025-12-01 20:03:14.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:03:14 compute-0 nova_compute[189564]: 2025-12-01 20:03:14.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 20:03:15 compute-0 podman[255849]: 2025-12-01 20:03:15.322972401 +0000 UTC m=+0.087861475 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 20:03:15 compute-0 nova_compute[189564]: 2025-12-01 20:03:15.527 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:16 compute-0 nova_compute[189564]: 2025-12-01 20:03:16.206 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:17 compute-0 nova_compute[189564]: 2025-12-01 20:03:17.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:03:18 compute-0 ovn_controller[97948]: 2025-12-01T20:03:18Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:ce:cc 10.100.0.11
Dec  1 20:03:18 compute-0 ovn_controller[97948]: 2025-12-01T20:03:18Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:ce:cc 10.100.0.11
Dec  1 20:03:18 compute-0 nova_compute[189564]: 2025-12-01 20:03:18.882 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:19 compute-0 nova_compute[189564]: 2025-12-01 20:03:19.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:03:19 compute-0 podman[255877]: 2025-12-01 20:03:19.293509535 +0000 UTC m=+0.070237936 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 20:03:19 compute-0 nova_compute[189564]: 2025-12-01 20:03:19.694 189568 DEBUG nova.objects.instance [None req-d54cd523-db4d-469b-b40f-d9f0b74a3fc2 f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lazy-loading 'flavor' on Instance uuid 4ace6300-5447-4f61-9b27-a7249155c57b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:03:19 compute-0 nova_compute[189564]: 2025-12-01 20:03:19.760 189568 DEBUG oslo_concurrency.lockutils [None req-d54cd523-db4d-469b-b40f-d9f0b74a3fc2 f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:03:19 compute-0 nova_compute[189564]: 2025-12-01 20:03:19.761 189568 DEBUG oslo_concurrency.lockutils [None req-d54cd523-db4d-469b-b40f-d9f0b74a3fc2 f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquired lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:03:19 compute-0 ovn_controller[97948]: 2025-12-01T20:03:19Z|00131|binding|INFO|Releasing lport 39b24bc2-6265-4d8f-9166-2751c476b101 from this chassis (sb_readonly=0)
Dec  1 20:03:19 compute-0 ovn_controller[97948]: 2025-12-01T20:03:19Z|00132|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:03:19 compute-0 ovn_controller[97948]: 2025-12-01T20:03:19Z|00133|binding|INFO|Releasing lport cb6caae9-9b40-4384-a692-7fed62ba0bdc from this chassis (sb_readonly=0)
Dec  1 20:03:19 compute-0 ovn_controller[97948]: 2025-12-01T20:03:19Z|00134|binding|INFO|Releasing lport b1e4fac5-26a3-4807-b860-bcfa4669fff5 from this chassis (sb_readonly=0)
Dec  1 20:03:20 compute-0 nova_compute[189564]: 2025-12-01 20:03:20.062 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:20 compute-0 nova_compute[189564]: 2025-12-01 20:03:20.530 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:21 compute-0 nova_compute[189564]: 2025-12-01 20:03:21.390 189568 DEBUG nova.network.neutron [None req-d54cd523-db4d-469b-b40f-d9f0b74a3fc2 f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  1 20:03:21 compute-0 nova_compute[189564]: 2025-12-01 20:03:21.514 189568 DEBUG nova.compute.manager [req-9a85cd32-b51f-4e80-9700-421d473491aa req-b47b907b-7395-4a74-aaf2-c6afa5a21d76 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received event network-changed-7101ff55-a92d-431c-8cc4-8b3412507465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:03:21 compute-0 nova_compute[189564]: 2025-12-01 20:03:21.515 189568 DEBUG nova.compute.manager [req-9a85cd32-b51f-4e80-9700-421d473491aa req-b47b907b-7395-4a74-aaf2-c6afa5a21d76 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Refreshing instance network info cache due to event network-changed-7101ff55-a92d-431c-8cc4-8b3412507465. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 20:03:21 compute-0 nova_compute[189564]: 2025-12-01 20:03:21.516 189568 DEBUG oslo_concurrency.lockutils [req-9a85cd32-b51f-4e80-9700-421d473491aa req-b47b907b-7395-4a74-aaf2-c6afa5a21d76 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:03:22 compute-0 nova_compute[189564]: 2025-12-01 20:03:22.365 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:23 compute-0 nova_compute[189564]: 2025-12-01 20:03:23.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:03:23 compute-0 nova_compute[189564]: 2025-12-01 20:03:23.247 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 20:03:23 compute-0 nova_compute[189564]: 2025-12-01 20:03:23.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 20:03:23 compute-0 podman[255906]: 2025-12-01 20:03:23.336886195 +0000 UTC m=+0.102328455 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, architecture=x86_64, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 20:03:23 compute-0 podman[255908]: 2025-12-01 20:03:23.346095121 +0000 UTC m=+0.103611425 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125)
Dec  1 20:03:23 compute-0 podman[255907]: 2025-12-01 20:03:23.35603782 +0000 UTC m=+0.123072550 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:03:23 compute-0 podman[255909]: 2025-12-01 20:03:23.361518742 +0000 UTC m=+0.116143846 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  1 20:03:23 compute-0 podman[255910]: 2025-12-01 20:03:23.380553164 +0000 UTC m=+0.125800065 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:03:23 compute-0 nova_compute[189564]: 2025-12-01 20:03:23.452 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:03:23 compute-0 nova_compute[189564]: 2025-12-01 20:03:23.453 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:03:23 compute-0 nova_compute[189564]: 2025-12-01 20:03:23.453 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 20:03:23 compute-0 nova_compute[189564]: 2025-12-01 20:03:23.455 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:03:23 compute-0 nova_compute[189564]: 2025-12-01 20:03:23.884 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:24 compute-0 nova_compute[189564]: 2025-12-01 20:03:24.455 189568 DEBUG nova.network.neutron [None req-d54cd523-db4d-469b-b40f-d9f0b74a3fc2 f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updating instance_info_cache with network_info: [{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:03:24 compute-0 nova_compute[189564]: 2025-12-01 20:03:24.510 189568 INFO nova.compute.manager [None req-d58af41b-5685-4793-abb0-ecb6d259d19a 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Get console output
Dec  1 20:03:24 compute-0 nova_compute[189564]: 2025-12-01 20:03:24.523 189568 DEBUG oslo_concurrency.lockutils [None req-d54cd523-db4d-469b-b40f-d9f0b74a3fc2 f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Releasing lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 20:03:24 compute-0 nova_compute[189564]: 2025-12-01 20:03:24.523 189568 DEBUG nova.compute.manager [None req-d54cd523-db4d-469b-b40f-d9f0b74a3fc2 f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec  1 20:03:24 compute-0 nova_compute[189564]: 2025-12-01 20:03:24.524 189568 DEBUG nova.compute.manager [None req-d54cd523-db4d-469b-b40f-d9f0b74a3fc2 f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] network_info to inject: |[{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec  1 20:03:24 compute-0 nova_compute[189564]: 2025-12-01 20:03:24.526 189568 DEBUG oslo_concurrency.lockutils [req-9a85cd32-b51f-4e80-9700-421d473491aa req-b47b907b-7395-4a74-aaf2-c6afa5a21d76 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:03:24 compute-0 nova_compute[189564]: 2025-12-01 20:03:24.526 189568 DEBUG nova.network.neutron [req-9a85cd32-b51f-4e80-9700-421d473491aa req-b47b907b-7395-4a74-aaf2-c6afa5a21d76 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Refreshing network info cache for port 7101ff55-a92d-431c-8cc4-8b3412507465 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 20:03:24 compute-0 nova_compute[189564]: 2025-12-01 20:03:24.634 239719 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.533 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.555 189568 DEBUG nova.objects.instance [None req-1eecf6af-f1d3-4b45-98fd-41142141870f f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lazy-loading 'flavor' on Instance uuid 4ace6300-5447-4f61-9b27-a7249155c57b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.589 189568 DEBUG oslo_concurrency.lockutils [None req-1eecf6af-f1d3-4b45-98fd-41142141870f f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.832 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Updating instance_info_cache with network_info: [{"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.862 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-4a104baa-5fd5-47aa-973b-11d99c76c3e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.862 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.863 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.863 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.864 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.895 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.896 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.896 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:03:25 compute-0 nova_compute[189564]: 2025-12-01 20:03:25.897 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.019 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.124 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.126 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.207 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.220 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.307 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.310 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.376 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.388 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.471 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.472 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.559 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.568 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.659 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.660 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.729 189568 DEBUG nova.compute.manager [req-91e311eb-7b40-4988-8ed0-0dd26bbdb95d req-eafa701c-e025-4a86-9887-1b3fe8244468 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received event network-changed-05dcfe74-fe60-45d4-b1df-aec9fcc57adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.730 189568 DEBUG nova.compute.manager [req-91e311eb-7b40-4988-8ed0-0dd26bbdb95d req-eafa701c-e025-4a86-9887-1b3fe8244468 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Refreshing instance network info cache due to event network-changed-05dcfe74-fe60-45d4-b1df-aec9fcc57adb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.730 189568 DEBUG oslo_concurrency.lockutils [req-91e311eb-7b40-4988-8ed0-0dd26bbdb95d req-eafa701c-e025-4a86-9887-1b3fe8244468 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.730 189568 DEBUG oslo_concurrency.lockutils [req-91e311eb-7b40-4988-8ed0-0dd26bbdb95d req-eafa701c-e025-4a86-9887-1b3fe8244468 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.730 189568 DEBUG nova.network.neutron [req-91e311eb-7b40-4988-8ed0-0dd26bbdb95d req-eafa701c-e025-4a86-9887-1b3fe8244468 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Refreshing network info cache for port 05dcfe74-fe60-45d4-b1df-aec9fcc57adb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 20:03:26 compute-0 nova_compute[189564]: 2025-12-01 20:03:26.740 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.204 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.206 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4701MB free_disk=72.22347640991211GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.206 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.207 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.373 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 4a104baa-5fd5-47aa-973b-11d99c76c3e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.373 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 4ace6300-5447-4f61-9b27-a7249155c57b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.374 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 6c1de815-4e42-4798-9a73-220b67333524 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.374 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.374 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.375 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.486 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.502 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
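
The inventory payload above is what placement turns into schedulable capacity via capacity = (total - reserved) * allocation_ratio. A quick check against the reported numbers (a sketch; the arithmetic only restates the log):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, 'schedulable capacity =', cap)
    # VCPU = 32.0, MEMORY_MB = 7168.0, DISK_GB = 70.2
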
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.531 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.532 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
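
The Acquiring/acquired/released triplet around the update (held 0.325s) is oslo.concurrency's lockutils instrumentation. A minimal sketch of the same serialization pattern; the lock name matches the log, the bodies are illustrative:

    from oslo_concurrency import lockutils

    # Context-manager form
    with lockutils.lock('compute_resources'):
        pass  # mutate the resource tracker's view under the lock

    # Decorator form, the style used for ResourceTracker methods
    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        pass
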
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.540 189568 DEBUG nova.network.neutron [req-9a85cd32-b51f-4e80-9700-421d473491aa req-b47b907b-7395-4a74-aaf2-c6afa5a21d76 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updated VIF entry in instance network info cache for port 7101ff55-a92d-431c-8cc4-8b3412507465. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.540 189568 DEBUG nova.network.neutron [req-9a85cd32-b51f-4e80-9700-421d473491aa req-b47b907b-7395-4a74-aaf2-c6afa5a21d76 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updating instance_info_cache with network_info: [{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
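
The instance_info_cache payload above is plain JSON, so pulling fixed and floating addresses out of it takes a few lines. A sketch over a trimmed copy of the entry (only the keys used here are kept; values are the ones from the log):

    import json

    blob = '''[{"id": "7101ff55-a92d-431c-8cc4-8b3412507465",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.12", "floating_ips": []},
                {"address": "10.100.0.6",
                 "floating_ips": [{"address": "192.168.122.189"}]}]}]}}]'''

    for vif in json.loads(blob):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                fips = [f['address'] for f in ip['floating_ips']]
                print(vif['id'], ip['address'], 'floating:', fips or '-')
    # 10.100.0.12 has no floating IP; 10.100.0.6 maps to 192.168.122.189
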
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.557 189568 DEBUG oslo_concurrency.lockutils [req-9a85cd32-b51f-4e80-9700-421d473491aa req-b47b907b-7395-4a74-aaf2-c6afa5a21d76 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.559 189568 DEBUG oslo_concurrency.lockutils [None req-1eecf6af-f1d3-4b45-98fd-41142141870f f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquired lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:03:27 compute-0 nova_compute[189564]: 2025-12-01 20:03:27.917 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:03:28 compute-0 nova_compute[189564]: 2025-12-01 20:03:28.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
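
The "Running periodic task" lines come from oslo.service's run_periodic_tasks loop; ComputeManager methods such as _instance_usage_audit and _check_instance_build_time are registered with a decorator. A minimal sketch of the registration pattern (the 60 s spacing is illustrative, not taken from the log):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # illustrative interval
        def _check_instance_build_time(self, context):
            pass  # e.g. flag instances stuck in BUILD too long

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # normally driven by a timer loop
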
Dec  1 20:03:28 compute-0 nova_compute[189564]: 2025-12-01 20:03:28.889 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:29 compute-0 ovn_controller[97948]: 2025-12-01T20:03:29Z|00135|binding|INFO|Releasing lport 39b24bc2-6265-4d8f-9166-2751c476b101 from this chassis (sb_readonly=0)
Dec  1 20:03:29 compute-0 ovn_controller[97948]: 2025-12-01T20:03:29Z|00136|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:03:29 compute-0 ovn_controller[97948]: 2025-12-01T20:03:29Z|00137|binding|INFO|Releasing lport cb6caae9-9b40-4384-a692-7fed62ba0bdc from this chassis (sb_readonly=0)
Dec  1 20:03:29 compute-0 ovn_controller[97948]: 2025-12-01T20:03:29Z|00138|binding|INFO|Releasing lport b1e4fac5-26a3-4807-b860-bcfa4669fff5 from this chassis (sb_readonly=0)
Dec  1 20:03:29 compute-0 nova_compute[189564]: 2025-12-01 20:03:29.385 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:29 compute-0 nova_compute[189564]: 2025-12-01 20:03:29.581 189568 DEBUG nova.network.neutron [None req-1eecf6af-f1d3-4b45-98fd-41142141870f f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:03:29 compute-0 nova_compute[189564]: 2025-12-01 20:03:29.684 189568 DEBUG nova.compute.manager [req-2fa2a259-c6ae-4c32-8efb-1f0bf2baa732 req-31c4a6c5-2f3f-4c95-a972-9337e306b338 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received event network-changed-7101ff55-a92d-431c-8cc4-8b3412507465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:29 compute-0 nova_compute[189564]: 2025-12-01 20:03:29.685 189568 DEBUG nova.compute.manager [req-2fa2a259-c6ae-4c32-8efb-1f0bf2baa732 req-31c4a6c5-2f3f-4c95-a972-9337e306b338 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Refreshing instance network info cache due to event network-changed-7101ff55-a92d-431c-8cc4-8b3412507465. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:03:29 compute-0 nova_compute[189564]: 2025-12-01 20:03:29.685 189568 DEBUG oslo_concurrency.lockutils [req-2fa2a259-c6ae-4c32-8efb-1f0bf2baa732 req-31c4a6c5-2f3f-4c95-a972-9337e306b338 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:03:29 compute-0 podman[203750]: time="2025-12-01T20:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:03:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 33218 "" "Go-http-client/1.1"
Dec  1 20:03:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6197 "" "Go-http-client/1.1"
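
The two GETs above are the libpod REST API being queried over podman's unix socket (the "@" in the access log is the socket peer). A stdlib-only sketch that replays the containers/json call; the socket path is the usual rootful default and is an assumption, it does not appear in the log:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')  # assumed path
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), 'bytes')
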
Dec  1 20:03:29 compute-0 nova_compute[189564]: 2025-12-01 20:03:29.817 189568 DEBUG nova.network.neutron [req-91e311eb-7b40-4988-8ed0-0dd26bbdb95d req-eafa701c-e025-4a86-9887-1b3fe8244468 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updated VIF entry in instance network info cache for port 05dcfe74-fe60-45d4-b1df-aec9fcc57adb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:03:29 compute-0 nova_compute[189564]: 2025-12-01 20:03:29.818 189568 DEBUG nova.network.neutron [req-91e311eb-7b40-4988-8ed0-0dd26bbdb95d req-eafa701c-e025-4a86-9887-1b3fe8244468 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updating instance_info_cache with network_info: [{"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:03:29 compute-0 nova_compute[189564]: 2025-12-01 20:03:29.853 189568 DEBUG oslo_concurrency.lockutils [req-91e311eb-7b40-4988-8ed0-0dd26bbdb95d req-eafa701c-e025-4a86-9887-1b3fe8244468 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:03:30 compute-0 nova_compute[189564]: 2025-12-01 20:03:30.535 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:31 compute-0 nova_compute[189564]: 2025-12-01 20:03:31.242 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:03:31 compute-0 openstack_network_exporter[205914]: ERROR   20:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:03:31 compute-0 openstack_network_exporter[205914]: ERROR   20:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:03:31 compute-0 openstack_network_exporter[205914]: ERROR   20:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:03:31 compute-0 openstack_network_exporter[205914]: ERROR   20:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:03:31 compute-0 openstack_network_exporter[205914]: ERROR   20:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:03:31 compute-0 nova_compute[189564]: 2025-12-01 20:03:31.608 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:31 compute-0 nova_compute[189564]: 2025-12-01 20:03:31.691 189568 DEBUG nova.network.neutron [None req-1eecf6af-f1d3-4b45-98fd-41142141870f f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updating instance_info_cache with network_info: [{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:03:31 compute-0 nova_compute[189564]: 2025-12-01 20:03:31.711 189568 DEBUG oslo_concurrency.lockutils [None req-1eecf6af-f1d3-4b45-98fd-41142141870f f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Releasing lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:03:31 compute-0 nova_compute[189564]: 2025-12-01 20:03:31.712 189568 DEBUG nova.compute.manager [None req-1eecf6af-f1d3-4b45-98fd-41142141870f f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  1 20:03:31 compute-0 nova_compute[189564]: 2025-12-01 20:03:31.713 189568 DEBUG nova.compute.manager [None req-1eecf6af-f1d3-4b45-98fd-41142141870f f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] network_info to inject: |[{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  1 20:03:31 compute-0 nova_compute[189564]: 2025-12-01 20:03:31.717 189568 DEBUG oslo_concurrency.lockutils [req-2fa2a259-c6ae-4c32-8efb-1f0bf2baa732 req-31c4a6c5-2f3f-4c95-a972-9337e306b338 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:03:31 compute-0 nova_compute[189564]: 2025-12-01 20:03:31.718 189568 DEBUG nova.network.neutron [req-2fa2a259-c6ae-4c32-8efb-1f0bf2baa732 req-31c4a6c5-2f3f-4c95-a972-9337e306b338 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Refreshing network info cache for port 7101ff55-a92d-431c-8cc4-8b3412507465 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.886 189568 DEBUG oslo_concurrency.lockutils [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "4ace6300-5447-4f61-9b27-a7249155c57b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.887 189568 DEBUG oslo_concurrency.lockutils [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.888 189568 DEBUG oslo_concurrency.lockutils [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.888 189568 DEBUG oslo_concurrency.lockutils [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.889 189568 DEBUG oslo_concurrency.lockutils [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.891 189568 INFO nova.compute.manager [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Terminating instance#033[00m
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.893 189568 DEBUG nova.compute.manager [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 20:03:32 compute-0 kernel: tap7101ff55-a9 (unregistering): left promiscuous mode
Dec  1 20:03:32 compute-0 NetworkManager[56474]: <info>  [1764619412.9389] device (tap7101ff55-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:03:32 compute-0 ovn_controller[97948]: 2025-12-01T20:03:32Z|00139|binding|INFO|Releasing lport 7101ff55-a92d-431c-8cc4-8b3412507465 from this chassis (sb_readonly=0)
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.957 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:32 compute-0 ovn_controller[97948]: 2025-12-01T20:03:32Z|00140|binding|INFO|Setting lport 7101ff55-a92d-431c-8cc4-8b3412507465 down in Southbound
Dec  1 20:03:32 compute-0 ovn_controller[97948]: 2025-12-01T20:03:32Z|00141|binding|INFO|Removing iface tap7101ff55-a9 ovn-installed in OVS
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.961 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:32.972 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:55:e7 10.100.0.6'], port_security=['fa:16:3e:69:55:e7 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4ace6300-5447-4f61-9b27-a7249155c57b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6d551f8-4db8-41ef-9a06-51292bc6bab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4517904b95d64f0c874d5afda12566c4', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b68416a2-a571-45d1-83ff-8369ecb15d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cd09c29-bcaf-417a-9d6d-85e82a6aa131, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=7101ff55-a92d-431c-8cc4-8b3412507465) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
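
The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's event machinery firing on a southbound Port_Binding row whose chassis column changed. A rough sketch of how such a row event is declared (names and matching logic are illustrative; the real agent's event is more involved):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch UPDATEs on the Port_Binding table, no static conditions
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only fire when the binding's chassis actually changed
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port', row.logical_port, 'binding changed')
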
Dec  1 20:03:32 compute-0 nova_compute[189564]: 2025-12-01 20:03:32.973 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:32.974 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 7101ff55-a92d-431c-8cc4-8b3412507465 in datapath f6d551f8-4db8-41ef-9a06-51292bc6bab6 unbound from our chassis#033[00m
Dec  1 20:03:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:32.978 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6d551f8-4db8-41ef-9a06-51292bc6bab6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 20:03:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:32.980 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[03dac530-49cf-4418-83c8-a7ebf5082636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:32.981 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6 namespace which is not needed anymore#033[00m
Dec  1 20:03:33 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  1 20:03:33 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000009.scope: Consumed 42.768s CPU time.
Dec  1 20:03:33 compute-0 systemd-machined[155891]: Machine qemu-8-instance-00000009 terminated.
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.129 189568 DEBUG nova.network.neutron [req-2fa2a259-c6ae-4c32-8efb-1f0bf2baa732 req-31c4a6c5-2f3f-4c95-a972-9337e306b338 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updated VIF entry in instance network info cache for port 7101ff55-a92d-431c-8cc4-8b3412507465. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.130 189568 DEBUG nova.network.neutron [req-2fa2a259-c6ae-4c32-8efb-1f0bf2baa732 req-31c4a6c5-2f3f-4c95-a972-9337e306b338 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updating instance_info_cache with network_info: [{"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.185 189568 INFO nova.virt.libvirt.driver [-] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Instance destroyed successfully.#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.186 189568 DEBUG nova.objects.instance [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lazy-loading 'resources' on Instance uuid 4ace6300-5447-4f61-9b27-a7249155c57b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:03:33 compute-0 neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6[254759]: [NOTICE]   (254764) : haproxy version is 2.8.14-c23fe91
Dec  1 20:03:33 compute-0 neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6[254759]: [NOTICE]   (254764) : path to executable is /usr/sbin/haproxy
Dec  1 20:03:33 compute-0 neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6[254759]: [WARNING]  (254764) : Exiting Master process...
Dec  1 20:03:33 compute-0 neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6[254759]: [ALERT]    (254764) : Current worker (254766) exited with code 143 (Terminated)
Dec  1 20:03:33 compute-0 neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6[254759]: [WARNING]  (254764) : All workers exited. Exiting... (0)
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.254 189568 DEBUG nova.virt.libvirt.vif [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:01:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1479640136',display_name='tempest-AttachInterfacesUnderV243Test-server-1479640136',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1479640136',id=9,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPVZoWW8r4f94xaa9hAUCfBMAMdM1AmJScI4znu9hdCX1jEINzVnS4DsiCUu/xmx9ibNZ0YEMnpa2LoFXPPqSMLj/g4TA6XBMSRJA8vxRXcj98f9dTCmQdhYfylR7YynQ==',key_name='tempest-keypair-1081056876',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:02:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4517904b95d64f0c874d5afda12566c4',ramdisk_id='',reservation_id='r-k1ogh0w2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1152149572',owner_user_name='tempest-AttachInterfacesUnderV243Test-1152149572-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:03:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f4faf878be724ad8aa31fd034c9818d9',uuid=4ace6300-5447-4f61-9b27-a7249155c57b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.255 189568 DEBUG nova.network.os_vif_util [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Converting VIF {"id": "7101ff55-a92d-431c-8cc4-8b3412507465", "address": "fa:16:3e:69:55:e7", "network": {"id": "f6d551f8-4db8-41ef-9a06-51292bc6bab6", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1484983586-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4517904b95d64f0c874d5afda12566c4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7101ff55-a9", "ovs_interfaceid": "7101ff55-a92d-431c-8cc4-8b3412507465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.256 189568 DEBUG nova.network.os_vif_util [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:55:e7,bridge_name='br-int',has_traffic_filtering=True,id=7101ff55-a92d-431c-8cc4-8b3412507465,network=Network(f6d551f8-4db8-41ef-9a06-51292bc6bab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7101ff55-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:03:33 compute-0 systemd[1]: libpod-d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670.scope: Deactivated successfully.
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.256 189568 DEBUG os_vif [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:55:e7,bridge_name='br-int',has_traffic_filtering=True,id=7101ff55-a92d-431c-8cc4-8b3412507465,network=Network(f6d551f8-4db8-41ef-9a06-51292bc6bab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7101ff55-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.258 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.258 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7101ff55-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
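
The DelPortCommand above is os-vif's unplug going through ovsdbapp. A sketch issuing the same transaction directly; the db.sock endpoint is the conventional local default (an assumption, it is not shown in the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same arguments as the logged command: port, bridge, if_exists
    api.del_port('tap7101ff55-a9', bridge='br-int',
                 if_exists=True).execute(check_error=True)
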
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.260 189568 DEBUG oslo_concurrency.lockutils [req-2fa2a259-c6ae-4c32-8efb-1f0bf2baa732 req-31c4a6c5-2f3f-4c95-a972-9337e306b338 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-4ace6300-5447-4f61-9b27-a7249155c57b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.261 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:33 compute-0 podman[256051]: 2025-12-01 20:03:33.262308226 +0000 UTC m=+0.138951884 container died d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.264 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.266 189568 INFO os_vif [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:55:e7,bridge_name='br-int',has_traffic_filtering=True,id=7101ff55-a92d-431c-8cc4-8b3412507465,network=Network(f6d551f8-4db8-41ef-9a06-51292bc6bab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7101ff55-a9')#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.267 189568 INFO nova.virt.libvirt.driver [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Deleting instance files /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b_del#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.268 189568 INFO nova.virt.libvirt.driver [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Deletion of /var/lib/nova/instances/4ace6300-5447-4f61-9b27-a7249155c57b_del complete#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.349 189568 INFO nova.compute.manager [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.350 189568 DEBUG oslo.service.loopingcall [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.350 189568 DEBUG nova.compute.manager [-] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.351 189568 DEBUG nova.network.neutron [-] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670-userdata-shm.mount: Deactivated successfully.
Dec  1 20:03:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4211cff01512ea3e49fdeae5b3d8f473d64bddd91f6be1fac0d1fca1ad30c9f1-merged.mount: Deactivated successfully.
Dec  1 20:03:33 compute-0 podman[256051]: 2025-12-01 20:03:33.388034068 +0000 UTC m=+0.264677686 container cleanup d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:03:33 compute-0 systemd[1]: libpod-conmon-d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670.scope: Deactivated successfully.
Dec  1 20:03:33 compute-0 podman[256096]: 2025-12-01 20:03:33.511360215 +0000 UTC m=+0.074970343 container remove d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  1 20:03:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:33.519 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[79b4a7cd-7bff-4fc5-9120-3dd91b71b38e]: (4, ('Mon Dec  1 08:03:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6 (d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670)\nd2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670\nMon Dec  1 08:03:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6 (d2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670)\nd2a1a4b50e867ea1cc67999ecf3954d066cbeb366f113ade7af0b4553fbff670\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:33.521 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[961f22b6-3829-4b12-bf5e-ab37a75d36e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:33.522 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6d551f8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.525 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:33 compute-0 kernel: tapf6d551f8-40: left promiscuous mode
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.540 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:33.543 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c40eb24e-5fc8-4948-9ef2-26a0cb05c665]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:33.557 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5a77fa-9681-4aff-b955-fa1c7fe1815f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:33.558 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[a193d9fc-7d8d-432e-ad0c-6777346cb8ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:33.574 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c36aac6d-edb0-435b-a1a0-82cd2757ff7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581629, 'reachable_time': 41620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256110, 'error': None, 'target': 'ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:33.578 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
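
The namespace teardown above is done by neutron's privsep helper, which drives pyroute2 underneath. A sketch of the equivalent direct call (needs the same privileges the daemon holds; the namespace name is the one from the log):

    from pyroute2 import netns

    ns = 'ovnmeta-f6d551f8-4db8-41ef-9a06-51292bc6bab6'
    if ns in netns.listnetns():
        netns.remove(ns)  # unbinds and deletes the named namespace
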
Dec  1 20:03:33 compute-0 systemd[1]: run-netns-ovnmeta\x2df6d551f8\x2d4db8\x2d41ef\x2d9a06\x2d51292bc6bab6.mount: Deactivated successfully.
Dec  1 20:03:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:33.578 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[15306ce1-85a0-44f2-9962-6d5404627e1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:33 compute-0 nova_compute[189564]: 2025-12-01 20:03:33.893 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.190 189568 DEBUG nova.network.neutron [-] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.213 189568 INFO nova.compute.manager [-] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Took 1.86 seconds to deallocate network for instance.#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.264 189568 DEBUG oslo_concurrency.lockutils [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.264 189568 DEBUG oslo_concurrency.lockutils [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.286 189568 DEBUG nova.compute.manager [req-88960cca-78ae-4a8f-9083-53cb4553f197 req-cac45f6d-c444-48cd-af86-1d8c55a7af98 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received event network-vif-deleted-7101ff55-a92d-431c-8cc4-8b3412507465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:35 compute-0 podman[256111]: 2025-12-01 20:03:35.348434186 +0000 UTC m=+0.115740992 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, release=1755695350, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.370 189568 DEBUG nova.compute.manager [req-6478b1e3-b629-4f7e-8b14-c828864e310b req-e88928da-d3c1-4a90-ac9a-e3b251bbe58a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received event network-vif-plugged-7101ff55-a92d-431c-8cc4-8b3412507465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.370 189568 DEBUG oslo_concurrency.lockutils [req-6478b1e3-b629-4f7e-8b14-c828864e310b req-e88928da-d3c1-4a90-ac9a-e3b251bbe58a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.371 189568 DEBUG oslo_concurrency.lockutils [req-6478b1e3-b629-4f7e-8b14-c828864e310b req-e88928da-d3c1-4a90-ac9a-e3b251bbe58a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.371 189568 DEBUG oslo_concurrency.lockutils [req-6478b1e3-b629-4f7e-8b14-c828864e310b req-e88928da-d3c1-4a90-ac9a-e3b251bbe58a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.371 189568 DEBUG nova.compute.manager [req-6478b1e3-b629-4f7e-8b14-c828864e310b req-e88928da-d3c1-4a90-ac9a-e3b251bbe58a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] No waiting events found dispatching network-vif-plugged-7101ff55-a92d-431c-8cc4-8b3412507465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.372 189568 WARNING nova.compute.manager [req-6478b1e3-b629-4f7e-8b14-c828864e310b req-e88928da-d3c1-4a90-ac9a-e3b251bbe58a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Received unexpected event network-vif-plugged-7101ff55-a92d-431c-8cc4-8b3412507465 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.401 189568 DEBUG nova.compute.provider_tree [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.423 189568 DEBUG nova.scheduler.client.report [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.444 189568 DEBUG oslo_concurrency.lockutils [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.477 189568 INFO nova.scheduler.client.report [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Deleted allocations for instance 4ace6300-5447-4f61-9b27-a7249155c57b#033[00m
Dec  1 20:03:35 compute-0 nova_compute[189564]: 2025-12-01 20:03:35.564 189568 DEBUG oslo_concurrency.lockutils [None req-e731a7ff-1568-46b2-8618-b7fa519bfbbb f4faf878be724ad8aa31fd034c9818d9 4517904b95d64f0c874d5afda12566c4 - - default default] Lock "4ace6300-5447-4f61-9b27-a7249155c57b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:36 compute-0 nova_compute[189564]: 2025-12-01 20:03:36.883 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:36 compute-0 nova_compute[189564]: 2025-12-01 20:03:36.883 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:36 compute-0 nova_compute[189564]: 2025-12-01 20:03:36.899 189568 DEBUG nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  1 20:03:36 compute-0 nova_compute[189564]: 2025-12-01 20:03:36.991 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:36 compute-0 nova_compute[189564]: 2025-12-01 20:03:36.992 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.005 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.005 189568 INFO nova.compute.claims [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.193 189568 DEBUG nova.compute.provider_tree [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.211 189568 DEBUG nova.scheduler.client.report [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.244 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.245 189568 DEBUG nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.308 189568 DEBUG nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.309 189568 DEBUG nova.network.neutron [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.334 189568 INFO nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.366 189568 DEBUG nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.478 189568 DEBUG nova.policy [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '715e289b64b4407387cbcfe958eb2d0f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '162c071887824085bcc9c384a2f8baf0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.485 189568 DEBUG nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.487 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.487 189568 INFO nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Creating image(s)#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.488 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "/var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.489 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "/var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.490 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "/var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.514 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.595 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.598 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "b6c46a34fa48a1b06387586e8222a42077151abd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.599 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.626 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.686 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.688 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.756 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd,backing_fmt=raw /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk 1073741824" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.758 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "b6c46a34fa48a1b06387586e8222a42077151abd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.759 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.860 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6c46a34fa48a1b06387586e8222a42077151abd --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.861 189568 DEBUG nova.virt.disk.api [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Checking if we can resize image /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.862 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.931 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.932 189568 DEBUG nova.virt.disk.api [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Cannot resize image /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  1 20:03:37 compute-0 nova_compute[189564]: 2025-12-01 20:03:37.933 189568 DEBUG nova.objects.instance [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lazy-loading 'migration_context' on Instance uuid cb05bc1e-3b85-4998-a503-39bd86bdc17e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:03:38 compute-0 nova_compute[189564]: 2025-12-01 20:03:38.262 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:38 compute-0 podman[256149]: 2025-12-01 20:03:38.876309737 +0000 UTC m=+0.121901795 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:03:38 compute-0 nova_compute[189564]: 2025-12-01 20:03:38.885 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  1 20:03:38 compute-0 nova_compute[189564]: 2025-12-01 20:03:38.885 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Ensure instance console log exists: /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  1 20:03:38 compute-0 nova_compute[189564]: 2025-12-01 20:03:38.887 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:38 compute-0 nova_compute[189564]: 2025-12-01 20:03:38.887 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:38 compute-0 nova_compute[189564]: 2025-12-01 20:03:38.888 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:38 compute-0 nova_compute[189564]: 2025-12-01 20:03:38.897 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:39 compute-0 nova_compute[189564]: 2025-12-01 20:03:39.050 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:39 compute-0 nova_compute[189564]: 2025-12-01 20:03:39.250 189568 DEBUG nova.network.neutron [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Successfully created port: ab2a4211-760a-400a-bd6c-243749c41a4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  1 20:03:40 compute-0 nova_compute[189564]: 2025-12-01 20:03:40.206 189568 DEBUG nova.network.neutron [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Successfully updated port: ab2a4211-760a-400a-bd6c-243749c41a4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  1 20:03:40 compute-0 nova_compute[189564]: 2025-12-01 20:03:40.236 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "refresh_cache-cb05bc1e-3b85-4998-a503-39bd86bdc17e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:03:40 compute-0 nova_compute[189564]: 2025-12-01 20:03:40.236 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquired lock "refresh_cache-cb05bc1e-3b85-4998-a503-39bd86bdc17e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:03:40 compute-0 nova_compute[189564]: 2025-12-01 20:03:40.237 189568 DEBUG nova.network.neutron [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  1 20:03:40 compute-0 nova_compute[189564]: 2025-12-01 20:03:40.294 189568 DEBUG nova.compute.manager [req-387c4410-3b94-45db-a282-f856123995b0 req-22e41968-7a76-4c36-9061-d1f8b31d8616 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Received event network-changed-ab2a4211-760a-400a-bd6c-243749c41a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:40 compute-0 nova_compute[189564]: 2025-12-01 20:03:40.295 189568 DEBUG nova.compute.manager [req-387c4410-3b94-45db-a282-f856123995b0 req-22e41968-7a76-4c36-9061-d1f8b31d8616 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Refreshing instance network info cache due to event network-changed-ab2a4211-760a-400a-bd6c-243749c41a4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:03:40 compute-0 nova_compute[189564]: 2025-12-01 20:03:40.296 189568 DEBUG oslo_concurrency.lockutils [req-387c4410-3b94-45db-a282-f856123995b0 req-22e41968-7a76-4c36-9061-d1f8b31d8616 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-cb05bc1e-3b85-4998-a503-39bd86bdc17e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:03:40 compute-0 nova_compute[189564]: 2025-12-01 20:03:40.438 189568 DEBUG nova.network.neutron [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.186 189568 DEBUG nova.network.neutron [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Updating instance_info_cache with network_info: [{"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.207 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Releasing lock "refresh_cache-cb05bc1e-3b85-4998-a503-39bd86bdc17e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.208 189568 DEBUG nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Instance network_info: |[{"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.209 189568 DEBUG oslo_concurrency.lockutils [req-387c4410-3b94-45db-a282-f856123995b0 req-22e41968-7a76-4c36-9061-d1f8b31d8616 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-cb05bc1e-3b85-4998-a503-39bd86bdc17e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.210 189568 DEBUG nova.network.neutron [req-387c4410-3b94-45db-a282-f856123995b0 req-22e41968-7a76-4c36-9061-d1f8b31d8616 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Refreshing network info cache for port ab2a4211-760a-400a-bd6c-243749c41a4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.215 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Start _get_guest_xml network_info=[{"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.239 189568 WARNING nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.251 189568 DEBUG nova.virt.libvirt.host [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.252 189568 DEBUG nova.virt.libvirt.host [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.262 189568 DEBUG nova.virt.libvirt.host [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.263 189568 DEBUG nova.virt.libvirt.host [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.264 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.265 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:00:12Z,direct_url=<?>,disk_format='qcow2',id=d169c234-7ac2-4fdc-b9fa-a08c93484d75,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='35d2a9caf1634dca9fc12ec078239d84',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:00:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.266 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.268 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.269 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.270 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.271 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.271 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.272 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.273 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.274 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.275 189568 DEBUG nova.virt.hardware [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.281 189568 DEBUG nova.virt.libvirt.vif [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:03:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-400616177',display_name='tempest-TestNetworkBasicOps-server-400616177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-400616177',id=12,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZz4r8QHWL5e6bdaXmmeBXWrJPoycGMIF22/s6cXa/qsI/JeoZ4nIVHktN0yEw5sVq7NOepXV+coQnzO/S0nl+vnmyrZbU9NIMBBwnv3xQCCt5vGYcM/BmPTvGlxk3WhA==',key_name='tempest-TestNetworkBasicOps-138657879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='162c071887824085bcc9c384a2f8baf0',ramdisk_id='',reservation_id='r-saoj57l7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-11937336',owner_user_name='tempest-TestNetworkBasicOps-11937336-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:03:37Z,user_data=None,user_id='715e289b64b4407387cbcfe958eb2d0f',uuid=cb05bc1e-3b85-4998-a503-39bd86bdc17e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.282 189568 DEBUG nova.network.os_vif_util [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converting VIF {"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.284 189568 DEBUG nova.network.os_vif_util [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c4:d1,bridge_name='br-int',has_traffic_filtering=True,id=ab2a4211-760a-400a-bd6c-243749c41a4e,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab2a4211-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.287 189568 DEBUG nova.objects.instance [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb05bc1e-3b85-4998-a503-39bd86bdc17e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.313 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <uuid>cb05bc1e-3b85-4998-a503-39bd86bdc17e</uuid>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <name>instance-0000000c</name>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <nova:name>tempest-TestNetworkBasicOps-server-400616177</nova:name>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:03:41</nova:creationTime>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:03:41 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:03:41 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:03:41 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:03:41 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:03:41 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:03:41 compute-0 nova_compute[189564]:        <nova:user uuid="715e289b64b4407387cbcfe958eb2d0f">tempest-TestNetworkBasicOps-11937336-project-member</nova:user>
Dec  1 20:03:41 compute-0 nova_compute[189564]:        <nova:project uuid="162c071887824085bcc9c384a2f8baf0">tempest-TestNetworkBasicOps-11937336</nova:project>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="d169c234-7ac2-4fdc-b9fa-a08c93484d75"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:03:41 compute-0 nova_compute[189564]:        <nova:port uuid="ab2a4211-760a-400a-bd6c-243749c41a4e">
Dec  1 20:03:41 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <system>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <entry name="serial">cb05bc1e-3b85-4998-a503-39bd86bdc17e</entry>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <entry name="uuid">cb05bc1e-3b85-4998-a503-39bd86bdc17e</entry>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    </system>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <os>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  </os>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <features>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  </features>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.config"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:d2:c4:d1"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <target dev="tapab2a4211-76"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/console.log" append="off"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <video>
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    </video>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:03:41 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:03:41 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:03:41 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:03:41 compute-0 nova_compute[189564]: </domain>
Dec  1 20:03:41 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.315 189568 DEBUG nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Preparing to wait for external event network-vif-plugged-ab2a4211-760a-400a-bd6c-243749c41a4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.316 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.317 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.317 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.319 189568 DEBUG nova.virt.libvirt.vif [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:03:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-400616177',display_name='tempest-TestNetworkBasicOps-server-400616177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-400616177',id=12,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZz4r8QHWL5e6bdaXmmeBXWrJPoycGMIF22/s6cXa/qsI/JeoZ4nIVHktN0yEw5sVq7NOepXV+coQnzO/S0nl+vnmyrZbU9NIMBBwnv3xQCCt5vGYcM/BmPTvGlxk3WhA==',key_name='tempest-TestNetworkBasicOps-138657879',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='162c071887824085bcc9c384a2f8baf0',ramdisk_id='',reservation_id='r-saoj57l7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-11937336',owner_user_name='tempest-TestNetworkBasicOps-11937336-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:03:37Z,user_data=None,user_id='715e289b64b4407387cbcfe958eb2d0f',uuid=cb05bc1e-3b85-4998-a503-39bd86bdc17e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.319 189568 DEBUG nova.network.os_vif_util [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converting VIF {"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.321 189568 DEBUG nova.network.os_vif_util [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c4:d1,bridge_name='br-int',has_traffic_filtering=True,id=ab2a4211-760a-400a-bd6c-243749c41a4e,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab2a4211-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.322 189568 DEBUG os_vif [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c4:d1,bridge_name='br-int',has_traffic_filtering=True,id=ab2a4211-760a-400a-bd6c-243749c41a4e,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab2a4211-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.323 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.324 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.325 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.330 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.331 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab2a4211-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.332 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab2a4211-76, col_values=(('external_ids', {'iface-id': 'ab2a4211-760a-400a-bd6c-243749c41a4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:c4:d1', 'vm-uuid': 'cb05bc1e-3b85-4998-a503-39bd86bdc17e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.334 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:41 compute-0 NetworkManager[56474]: <info>  [1764619421.3359] manager: (tapab2a4211-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.337 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.346 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.347 189568 INFO os_vif [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:c4:d1,bridge_name='br-int',has_traffic_filtering=True,id=ab2a4211-760a-400a-bd6c-243749c41a4e,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab2a4211-76')#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.427 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.428 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.428 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] No VIF found with MAC fa:16:3e:d2:c4:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.428 189568 INFO nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Using config drive#033[00m
Dec  1 20:03:41 compute-0 ovn_controller[97948]: 2025-12-01T20:03:41Z|00142|binding|INFO|Releasing lport 39b24bc2-6265-4d8f-9166-2751c476b101 from this chassis (sb_readonly=0)
Dec  1 20:03:41 compute-0 ovn_controller[97948]: 2025-12-01T20:03:41Z|00143|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:03:41 compute-0 ovn_controller[97948]: 2025-12-01T20:03:41Z|00144|binding|INFO|Releasing lport b1e4fac5-26a3-4807-b860-bcfa4669fff5 from this chassis (sb_readonly=0)
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.668 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.842 189568 INFO nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Creating config drive at /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.config#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.850 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5x3amgbj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:03:41 compute-0 nova_compute[189564]: 2025-12-01 20:03:41.995 189568 DEBUG oslo_concurrency.processutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5x3amgbj" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:03:42 compute-0 kernel: tapab2a4211-76: entered promiscuous mode
Dec  1 20:03:42 compute-0 NetworkManager[56474]: <info>  [1764619422.0950] manager: (tapab2a4211-76): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Dec  1 20:03:42 compute-0 ovn_controller[97948]: 2025-12-01T20:03:42Z|00145|binding|INFO|Claiming lport ab2a4211-760a-400a-bd6c-243749c41a4e for this chassis.
Dec  1 20:03:42 compute-0 ovn_controller[97948]: 2025-12-01T20:03:42Z|00146|binding|INFO|ab2a4211-760a-400a-bd6c-243749c41a4e: Claiming fa:16:3e:d2:c4:d1 10.100.0.4
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.100 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:42 compute-0 ovn_controller[97948]: 2025-12-01T20:03:42Z|00147|binding|INFO|Setting lport ab2a4211-760a-400a-bd6c-243749c41a4e ovn-installed in OVS
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.118 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.120 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.137 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.148 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:c4:d1 10.100.0.4'], port_security=['fa:16:3e:d2:c4:d1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cb05bc1e-3b85-4998-a503-39bd86bdc17e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '162c071887824085bcc9c384a2f8baf0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '006fce21-a511-489a-880a-d2b4557c5d3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=814c1014-135a-4652-9979-0910a324d6ee, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=ab2a4211-760a-400a-bd6c-243749c41a4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:03:42 compute-0 systemd-udevd[256193]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.149 106833 INFO neutron.agent.ovn.metadata.agent [-] Port ab2a4211-760a-400a-bd6c-243749c41a4e in datapath d273f808-5cbd-4428-9f2c-ed8b50232c12 bound to our chassis#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.151 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d273f808-5cbd-4428-9f2c-ed8b50232c12#033[00m
Dec  1 20:03:42 compute-0 ovn_controller[97948]: 2025-12-01T20:03:42Z|00148|binding|INFO|Setting lport ab2a4211-760a-400a-bd6c-243749c41a4e up in Southbound
Dec  1 20:03:42 compute-0 systemd-machined[155891]: New machine qemu-13-instance-0000000c.
Dec  1 20:03:42 compute-0 NetworkManager[56474]: <info>  [1764619422.1684] device (tapab2a4211-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:03:42 compute-0 NetworkManager[56474]: <info>  [1764619422.1693] device (tapab2a4211-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.170 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[99afafc2-2778-4a0a-b93a-77c6fcf5d174]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:42 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.219 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[e7367ac9-7c7f-4e99-8a04-3cfd065b6e62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.223 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0a5641-a753-4b15-a066-72f2d4d3b05f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.265 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[dd595b8e-15a1-46a5-ab8e-37230af43dcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.290 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d195c6ba-3883-4c63-80b2-9a67d286e4dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd273f808-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:ef:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584520, 'reachable_time': 21071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256207, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.312 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d2f839-fe63-4644-af3d-66a1dfd0be28]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd273f808-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584535, 'tstamp': 584535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256209, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd273f808-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584540, 'tstamp': 584540}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256209, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.313 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd273f808-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.315 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.317 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.318 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd273f808-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.318 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.319 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd273f808-50, col_values=(('external_ids', {'iface-id': 'b1e4fac5-26a3-4807-b860-bcfa4669fff5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:42 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:42.319 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.510 189568 DEBUG nova.network.neutron [req-387c4410-3b94-45db-a282-f856123995b0 req-22e41968-7a76-4c36-9061-d1f8b31d8616 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Updated VIF entry in instance network info cache for port ab2a4211-760a-400a-bd6c-243749c41a4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.511 189568 DEBUG nova.network.neutron [req-387c4410-3b94-45db-a282-f856123995b0 req-22e41968-7a76-4c36-9061-d1f8b31d8616 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Updating instance_info_cache with network_info: [{"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.536 189568 DEBUG oslo_concurrency.lockutils [req-387c4410-3b94-45db-a282-f856123995b0 req-22e41968-7a76-4c36-9061-d1f8b31d8616 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-cb05bc1e-3b85-4998-a503-39bd86bdc17e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.834 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619422.8341775, cb05bc1e-3b85-4998-a503-39bd86bdc17e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.835 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] VM Started (Lifecycle Event)#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.861 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.869 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619422.8344033, cb05bc1e-3b85-4998-a503-39bd86bdc17e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.869 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.893 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.902 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.913 189568 DEBUG nova.compute.manager [req-5a31d413-74ea-46a6-b56e-9e1e212d16ef req-13f0a7a1-da28-43ac-b4d7-a63a7dbda053 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Received event network-vif-plugged-ab2a4211-760a-400a-bd6c-243749c41a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.913 189568 DEBUG oslo_concurrency.lockutils [req-5a31d413-74ea-46a6-b56e-9e1e212d16ef req-13f0a7a1-da28-43ac-b4d7-a63a7dbda053 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.913 189568 DEBUG oslo_concurrency.lockutils [req-5a31d413-74ea-46a6-b56e-9e1e212d16ef req-13f0a7a1-da28-43ac-b4d7-a63a7dbda053 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.914 189568 DEBUG oslo_concurrency.lockutils [req-5a31d413-74ea-46a6-b56e-9e1e212d16ef req-13f0a7a1-da28-43ac-b4d7-a63a7dbda053 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.914 189568 DEBUG nova.compute.manager [req-5a31d413-74ea-46a6-b56e-9e1e212d16ef req-13f0a7a1-da28-43ac-b4d7-a63a7dbda053 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Processing event network-vif-plugged-ab2a4211-760a-400a-bd6c-243749c41a4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.915 189568 DEBUG nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.924 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.925 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.925 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619422.9220555, cb05bc1e-3b85-4998-a503-39bd86bdc17e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.926 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.938 189568 INFO nova.virt.libvirt.driver [-] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Instance spawned successfully.#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.938 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.946 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.969 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.978 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.979 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.980 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.981 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.982 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.983 189568 DEBUG nova.virt.libvirt.driver [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:03:42 compute-0 nova_compute[189564]: 2025-12-01 20:03:42.994 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:03:43 compute-0 nova_compute[189564]: 2025-12-01 20:03:43.044 189568 INFO nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Took 5.56 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 20:03:43 compute-0 nova_compute[189564]: 2025-12-01 20:03:43.045 189568 DEBUG nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:03:43 compute-0 nova_compute[189564]: 2025-12-01 20:03:43.185 189568 INFO nova.compute.manager [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Took 6.21 seconds to build instance.#033[00m
Dec  1 20:03:43 compute-0 nova_compute[189564]: 2025-12-01 20:03:43.217 189568 DEBUG oslo_concurrency.lockutils [None req-d4149dcd-3b17-4052-99ac-95e505cfa62d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:43 compute-0 nova_compute[189564]: 2025-12-01 20:03:43.902 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:45 compute-0 nova_compute[189564]: 2025-12-01 20:03:45.061 189568 DEBUG nova.compute.manager [req-59972e3c-dbe1-43c8-b504-34f0d8847d52 req-f9710f52-2458-4241-ac82-6da4572a23c1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Received event network-vif-plugged-ab2a4211-760a-400a-bd6c-243749c41a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:45 compute-0 nova_compute[189564]: 2025-12-01 20:03:45.061 189568 DEBUG oslo_concurrency.lockutils [req-59972e3c-dbe1-43c8-b504-34f0d8847d52 req-f9710f52-2458-4241-ac82-6da4572a23c1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:45 compute-0 nova_compute[189564]: 2025-12-01 20:03:45.062 189568 DEBUG oslo_concurrency.lockutils [req-59972e3c-dbe1-43c8-b504-34f0d8847d52 req-f9710f52-2458-4241-ac82-6da4572a23c1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:45 compute-0 nova_compute[189564]: 2025-12-01 20:03:45.062 189568 DEBUG oslo_concurrency.lockutils [req-59972e3c-dbe1-43c8-b504-34f0d8847d52 req-f9710f52-2458-4241-ac82-6da4572a23c1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:45 compute-0 nova_compute[189564]: 2025-12-01 20:03:45.062 189568 DEBUG nova.compute.manager [req-59972e3c-dbe1-43c8-b504-34f0d8847d52 req-f9710f52-2458-4241-ac82-6da4572a23c1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] No waiting events found dispatching network-vif-plugged-ab2a4211-760a-400a-bd6c-243749c41a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:03:45 compute-0 nova_compute[189564]: 2025-12-01 20:03:45.063 189568 WARNING nova.compute.manager [req-59972e3c-dbe1-43c8-b504-34f0d8847d52 req-f9710f52-2458-4241-ac82-6da4572a23c1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Received unexpected event network-vif-plugged-ab2a4211-760a-400a-bd6c-243749c41a4e for instance with vm_state active and task_state None.#033[00m
Dec  1 20:03:45 compute-0 nova_compute[189564]: 2025-12-01 20:03:45.892 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:45 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:45.895 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:03:45 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:45.898 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 20:03:46 compute-0 ovn_controller[97948]: 2025-12-01T20:03:46Z|00149|binding|INFO|Releasing lport 39b24bc2-6265-4d8f-9166-2751c476b101 from this chassis (sb_readonly=0)
Dec  1 20:03:46 compute-0 ovn_controller[97948]: 2025-12-01T20:03:46Z|00150|binding|INFO|Releasing lport 0966f8f1-95fd-4a77-80c1-25197c60ec2b from this chassis (sb_readonly=0)
Dec  1 20:03:46 compute-0 ovn_controller[97948]: 2025-12-01T20:03:46Z|00151|binding|INFO|Releasing lport b1e4fac5-26a3-4807-b860-bcfa4669fff5 from this chassis (sb_readonly=0)
Dec  1 20:03:46 compute-0 nova_compute[189564]: 2025-12-01 20:03:46.082 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:46 compute-0 nova_compute[189564]: 2025-12-01 20:03:46.335 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:46 compute-0 podman[256217]: 2025-12-01 20:03:46.359294611 +0000 UTC m=+0.111250894 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:03:46 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:46.803 106940 DEBUG eventlet.wsgi.server [-] (106940) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  1 20:03:46 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:46.805 106940 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Dec  1 20:03:46 compute-0 ovn_metadata_agent[106828]: Accept: */*#015
Dec  1 20:03:46 compute-0 ovn_metadata_agent[106828]: Connection: close#015
Dec  1 20:03:46 compute-0 ovn_metadata_agent[106828]: Content-Type: text/plain#015
Dec  1 20:03:46 compute-0 ovn_metadata_agent[106828]: Host: 169.254.169.254#015
Dec  1 20:03:46 compute-0 ovn_metadata_agent[106828]: User-Agent: curl/7.84.0#015
Dec  1 20:03:46 compute-0 ovn_metadata_agent[106828]: X-Forwarded-For: 10.100.0.14#015
Dec  1 20:03:46 compute-0 ovn_metadata_agent[106828]: X-Ovn-Network-Id: 61c137f0-effb-4f90-8a6c-ea3831f8e4db __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
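The request dump above is what the metadata server sees after the per-network haproxy has proxied the guest's call; the X-Forwarded-For and X-Ovn-Network-Id headers are stamped on by that proxy, not sent by the guest. A rough stdlib replay of the guest's side of this exchange, assuming it runs inside the instance (169.254.169.254 is unreachable from anywhere else):

    import http.client

    # Replays the GET logged above. Only the basic headers are ours;
    # the proxy adds X-Forwarded-For and X-Ovn-Network-Id in transit.
    conn = http.client.HTTPConnection("169.254.169.254", 80, timeout=5)
    conn.request("GET", "/latest/meta-data/public-ipv4", headers={
        "Accept": "*/*",
        "Connection": "close",
        "User-Agent": "curl/7.84.0",
    })
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())
    conn.close()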
Dec  1 20:03:47 compute-0 nova_compute[189564]: 2025-12-01 20:03:47.171 189568 DEBUG nova.compute.manager [req-513a184c-cbba-4293-a888-dd8c25f89be6 req-820a348d-a809-4283-896b-bd308a06b5f1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Received event network-changed-ab2a4211-760a-400a-bd6c-243749c41a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:47 compute-0 nova_compute[189564]: 2025-12-01 20:03:47.171 189568 DEBUG nova.compute.manager [req-513a184c-cbba-4293-a888-dd8c25f89be6 req-820a348d-a809-4283-896b-bd308a06b5f1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Refreshing instance network info cache due to event network-changed-ab2a4211-760a-400a-bd6c-243749c41a4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  1 20:03:47 compute-0 nova_compute[189564]: 2025-12-01 20:03:47.172 189568 DEBUG oslo_concurrency.lockutils [req-513a184c-cbba-4293-a888-dd8c25f89be6 req-820a348d-a809-4283-896b-bd308a06b5f1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-cb05bc1e-3b85-4998-a503-39bd86bdc17e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:03:47 compute-0 nova_compute[189564]: 2025-12-01 20:03:47.172 189568 DEBUG oslo_concurrency.lockutils [req-513a184c-cbba-4293-a888-dd8c25f89be6 req-820a348d-a809-4283-896b-bd308a06b5f1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-cb05bc1e-3b85-4998-a503-39bd86bdc17e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:03:47 compute-0 nova_compute[189564]: 2025-12-01 20:03:47.172 189568 DEBUG nova.network.neutron [req-513a184c-cbba-4293-a888-dd8c25f89be6 req-820a348d-a809-4283-896b-bd308a06b5f1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Refreshing network info cache for port ab2a4211-760a-400a-bd6c-243749c41a4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.182 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619413.1803625, 4ace6300-5447-4f61-9b27-a7249155c57b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.182 189568 INFO nova.compute.manager [-] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] VM Stopped (Lifecycle Event)#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.305 189568 DEBUG nova.compute.manager [None req-3e920b7e-fca4-4ca4-839c-60463c9a9048 - - - - - -] [instance: 4ace6300-5447-4f61-9b27-a7249155c57b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.387 189568 DEBUG oslo_concurrency.lockutils [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.387 189568 DEBUG oslo_concurrency.lockutils [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.387 189568 DEBUG oslo_concurrency.lockutils [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.388 189568 DEBUG oslo_concurrency.lockutils [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.388 189568 DEBUG oslo_concurrency.lockutils [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.389 189568 INFO nova.compute.manager [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Terminating instance#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.389 189568 DEBUG nova.compute.manager [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
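The lock lines above show the standard oslo.concurrency pattern: terminate serializes on the instance UUID, then briefly takes a second "<uuid>-events" lock to clear pending external events before destroying the instance. A toy stdlib equivalent of that named-lock pattern (this is a sketch of the idiom, not nova's or lockutils' actual code):

    import threading
    from collections import defaultdict
    from contextlib import contextmanager

    _locks = defaultdict(threading.Lock)  # one lock per name, lazily created

    @contextmanager
    def lock(name):
        print(f'Acquiring lock "{name}"')
        with _locks[name]:
            print(f'Lock "{name}" acquired')
            yield
        print(f'Lock "{name}" "released"')

    def do_terminate_instance(uuid):
        with lock(uuid):                   # serialize all lifecycle ops on this instance
            with lock(f"{uuid}-events"):   # then drop any events still waiting on it
                pass
            print(f"[instance: {uuid}] Terminating instance")

    do_terminate_instance("4a104baa-5fd5-47aa-973b-11d99c76c3e2")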
Dec  1 20:03:48 compute-0 kernel: tap09097114-7a (unregistering): left promiscuous mode
Dec  1 20:03:48 compute-0 NetworkManager[56474]: <info>  [1764619428.4428] device (tap09097114-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:03:48 compute-0 ovn_controller[97948]: 2025-12-01T20:03:48Z|00152|binding|INFO|Releasing lport 09097114-7a48-4b64-ab17-ed474efbf80e from this chassis (sb_readonly=0)
Dec  1 20:03:48 compute-0 ovn_controller[97948]: 2025-12-01T20:03:48Z|00153|binding|INFO|Setting lport 09097114-7a48-4b64-ab17-ed474efbf80e down in Southbound
Dec  1 20:03:48 compute-0 ovn_controller[97948]: 2025-12-01T20:03:48Z|00154|binding|INFO|Removing iface tap09097114-7a ovn-installed in OVS
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.482 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:bf:1a 10.100.0.13'], port_security=['fa:16:3e:3e:bf:1a 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4a104baa-5fd5-47aa-973b-11d99c76c3e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5102d72cb1ce4e6da810b2584a2abd73', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fb1a9182-2a79-4a69-a063-58799cf34a33', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.211', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0f29072-dc2b-4972-a602-c2fe180fbdaf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=09097114-7a48-4b64-ab17-ed474efbf80e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.480 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.483 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 09097114-7a48-4b64-ab17-ed474efbf80e in datapath 419dfb65-f0dd-44b5-a131-b7c37ebf4bab unbound from our chassis#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.485 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 419dfb65-f0dd-44b5-a131-b7c37ebf4bab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.498 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[bc37da40-160c-41c4-a2fa-8f07226779a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.499 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab namespace which is not needed anymore#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.501 106940 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.503 106940 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.6980674#033[00m
Dec  1 20:03:48 compute-0 haproxy-metadata-proxy-61c137f0-effb-4f90-8a6c-ea3831f8e4db[255290]: 10.100.0.14:50022 [01/Dec/2025:20:03:46.801] listener listener/metadata 0/0/0/1701/1701 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
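The haproxy access line above packs five slash-separated timers (roughly: client request time, queue time, connect time, server response time, total) plus status and byte counts. A loose parser for this exact line shape; the regex is a sketch tuned to this sample, not a general haproxy log grammar:

    import re

    line = ('10.100.0.14:50022 [01/Dec/2025:20:03:46.801] listener listener/metadata '
            '0/0/0/1701/1701 200 135 - - ---- 1/1/0/0/0 0/0 '
            '"GET /latest/meta-data/public-ipv4 HTTP/1.1"')

    m = re.search(r'(\S+) \[([^\]]+)\] (\S+) (\S+) '
                  r'(\d+)/(\d+)/(\d+)/(\d+)/(\d+) (\d+) (\d+)', line)
    client, ts, frontend, backend, tq, tw, tc, tr, tt, status, nbytes = m.groups()
    print(f"client={client} status={status} bytes={nbytes} "
          f"server-time={tr}ms total={tt}ms")
    # tr=1701 ms lines up with the wsgi server's reported 1.6980674 s
    # for the same GET two lines earlier.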
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.508 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:48 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  1 20:03:48 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000007.scope: Consumed 42.460s CPU time.
Dec  1 20:03:48 compute-0 systemd-machined[155891]: Machine qemu-12-instance-00000007 terminated.
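The scope name machine-qemu\x2d12\x2dinstance\x2d00000007.scope is systemd's escaped form of the machine name in the systemd-machined line (systemd encodes bytes such as '-' as \xNN in unit names). A small decoder:

    import re

    def unescape_systemd(name):
        # Undo systemd's \xNN byte escaping in unit names.
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    scope = r'machine-qemu\x2d12\x2dinstance\x2d00000007.scope'
    print(unescape_systemd(scope))  # machine-qemu-12-instance-00000007.scope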
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.613 106940 DEBUG eventlet.wsgi.server [-] (106940) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.616 106940 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: Accept: */*#015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: Connection: close#015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: Content-Length: 100#015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: Content-Type: application/x-www-form-urlencoded#015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: Host: 169.254.169.254#015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: User-Agent: curl/7.84.0#015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: X-Forwarded-For: 10.100.0.14#015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: X-Ovn-Network-Id: 61c137f0-effb-4f90-8a6c-ea3831f8e4db#015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: #015
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
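The POST above is the guest pushing its password blob to Nova's config-drive password endpoint through the same metadata proxy; the Content-Length of 100 matches the "test" string repeated 25 times in the body. A hedged stdlib replay, again assuming it runs from inside the guest:

    import urllib.request

    body = b"test" * 25  # the 100-byte payload logged above
    req = urllib.request.Request(
        "http://169.254.169.254/openstack/2013-10-17/password",
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
    # Only reachable from the instance; per the log, a 200 with a short body comes back.
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.status, resp.read())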
Dec  1 20:03:48 compute-0 kernel: tap09097114-7a: entered promiscuous mode
Dec  1 20:03:48 compute-0 systemd-udevd[256241]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:03:48 compute-0 NetworkManager[56474]: <info>  [1764619428.6223] manager: (tap09097114-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Dec  1 20:03:48 compute-0 kernel: tap09097114-7a (unregistering): left promiscuous mode
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.643 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.678 189568 INFO nova.virt.libvirt.driver [-] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Instance destroyed successfully.#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.679 189568 DEBUG nova.objects.instance [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lazy-loading 'resources' on Instance uuid 4a104baa-5fd5-47aa-973b-11d99c76c3e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.701 189568 DEBUG nova.virt.libvirt.vif [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1064429924',display_name='tempest-ServerActionsTestJSON-server-1064429924',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1064429924',id=7,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNy2Fa/005sFOm6rBTfWAhWPMicjwNe2lxBTmDNZ4YT4rkioptEkmqoV9BaZ0x7iRnfzTvUcepaaUfsJtdWIwpd6ISWDG/KMPFbrCHDmVc4nqNhxbzpyNrnXIODKw/JJYg==',key_name='tempest-keypair-1301911410',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:01:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5102d72cb1ce4e6da810b2584a2abd73',ramdisk_id='',reservation_id='r-3k9rdt17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-87382225',owner_user_name='tempest-ServerActionsTestJSON-87382225-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:02:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='89c8a8cb31224140bf2b9c0b94acfe04',uuid=4a104baa-5fd5-47aa-973b-11d99c76c3e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.702 189568 DEBUG nova.network.os_vif_util [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converting VIF {"id": "09097114-7a48-4b64-ab17-ed474efbf80e", "address": "fa:16:3e:3e:bf:1a", "network": {"id": "419dfb65-f0dd-44b5-a131-b7c37ebf4bab", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-188173667-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5102d72cb1ce4e6da810b2584a2abd73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09097114-7a", "ovs_interfaceid": "09097114-7a48-4b64-ab17-ed474efbf80e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.703 189568 DEBUG nova.network.os_vif_util [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.703 189568 DEBUG os_vif [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.705 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.706 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09097114-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.709 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.713 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.716 189568 INFO os_vif [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:bf:1a,bridge_name='br-int',has_traffic_filtering=True,id=09097114-7a48-4b64-ab17-ed474efbf80e,network=Network(419dfb65-f0dd-44b5-a131-b7c37ebf4bab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09097114-7a')#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.717 189568 INFO nova.virt.libvirt.driver [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Deleting instance files /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2_del#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.717 189568 INFO nova.virt.libvirt.driver [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Deletion of /var/lib/nova/instances/4a104baa-5fd5-47aa-973b-11d99c76c3e2_del complete#033[00m
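The two driver lines above suggest a rename-then-remove idiom: the instance directory is first renamed to /var/lib/nova/instances/<uuid>_del, so a crash mid-cleanup leaves an obviously dead directory rather than a half-deleted live one, and the renamed tree is then removed. A toy demonstration on a throwaway temp directory (this illustrates the idiom, not nova's actual implementation):

    import os, shutil, tempfile

    base = tempfile.mkdtemp()
    inst = os.path.join(base, "4a104baa-5fd5-47aa-973b-11d99c76c3e2")
    os.mkdir(inst)

    target = inst + "_del"
    os.rename(inst, target)   # atomic: a crash now leaves only "<uuid>_del" behind
    shutil.rmtree(target)     # then delete the renamed tree at leisure
    print("Deletion of %s complete" % target)
    shutil.rmtree(base)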
Dec  1 20:03:48 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[255618]: [NOTICE]   (255622) : haproxy version is 2.8.14-c23fe91
Dec  1 20:03:48 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[255618]: [NOTICE]   (255622) : path to executable is /usr/sbin/haproxy
Dec  1 20:03:48 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[255618]: [WARNING]  (255622) : Exiting Master process...
Dec  1 20:03:48 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[255618]: [ALERT]    (255622) : Current worker (255624) exited with code 143 (Terminated)
Dec  1 20:03:48 compute-0 neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab[255618]: [WARNING]  (255622) : All workers exited. Exiting... (0)
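The "exited with code 143" in the ALERT above is not an error as such: 143 is 128 + 15, the shell encoding of death by SIGTERM, which is exactly what the container runtime sends when stopping the haproxy sidecar. A quick demonstration of the arithmetic (Linux, stdlib only):

    import signal, subprocess

    p = subprocess.Popen(["sleep", "60"])
    p.terminate()               # SIGTERM, what "podman stop" sends first
    p.wait()
    print(p.returncode)         # -15: Python reports death by signal 15
    print(128 + signal.SIGTERM) # 143: the same death as a shell/haproxy exit code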
Dec  1 20:03:48 compute-0 systemd[1]: libpod-590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934.scope: Deactivated successfully.
Dec  1 20:03:48 compute-0 conmon[255618]: conmon 590c759611d74775ccc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934.scope/container/memory.events
Dec  1 20:03:48 compute-0 podman[256268]: 2025-12-01 20:03:48.76873726 +0000 UTC m=+0.084180870 container died 590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.782 189568 INFO nova.compute.manager [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.782 189568 DEBUG oslo.service.loopingcall [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.783 189568 DEBUG nova.compute.manager [-] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.784 189568 DEBUG nova.network.neutron [-] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.821 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling run can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.822 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.822 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.823 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
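The long run of "Registering pollster" lines above, together with the earlier warning that pollsters outnumber worker threads, means the pollsters for this source are queued onto a one-thread executor and run serially. A minimal sketch of why that stretches the polling cycle, using the stdlib executor the log also names (timings and pollster names here are made up for illustration):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)      # stand-in for one pollster's real work
        return name

    pollsters = [f"pollster-{i}" for i in range(8)]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as ex:   # fewer workers than pollsters
        list(ex.map(poll, pollsters))
    print(f"{len(pollsters)} pollsters, 1 thread: {time.monotonic() - start:.1f}s")
    # With max_workers=1 the total is ~8 x 0.1 s; more workers would overlap the waits.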
Dec  1 20:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcaedc1cb9702662f57327f8efb7de9ad6e4d6aaf864bcd5394c1b5ada553131-merged.mount: Deactivated successfully.
Dec  1 20:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934-userdata-shm.mount: Deactivated successfully.
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.834 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 6c1de815-4e42-4798-9a73-220b67333524 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 20:03:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:48.836 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/6c1de815-4e42-4798-9a73-220b67333524 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 20:03:48 compute-0 podman[256268]: 2025-12-01 20:03:48.836878491 +0000 UTC m=+0.152322101 container cleanup 590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:03:48 compute-0 systemd[1]: libpod-conmon-590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934.scope: Deactivated successfully.
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.903 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:48 compute-0 podman[256297]: 2025-12-01 20:03:48.940674131 +0000 UTC m=+0.072157177 container remove 590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.943 106940 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.943 106940 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3278711#033[00m
Dec  1 20:03:48 compute-0 haproxy-metadata-proxy-61c137f0-effb-4f90-8a6c-ea3831f8e4db[255290]: 10.100.0.14:50036 [01/Dec/2025:20:03:48.612] listener listener/metadata 0/0/0/331/331 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.957 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef6712d-2ac4-4873-a0d0-6eb9b506c4ef]: (4, ('Mon Dec  1 08:03:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab (590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934)\n590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934\nMon Dec  1 08:03:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab (590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934)\n590c759611d74775ccc5f04134592fd49335012f6c43c247141945fd6c7d9934\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.960 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[4d37f622-1dad-47f7-a9cc-0e0738935394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
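The privsep reply lines above are responses flowing back from a privileged helper process: each carries a message id and a (status, result) pair, with status 4 marking a normal reply. A toy in-process analogue of that request/reply channel, built from stdlib queues; the names and the status constant's meaning are assumptions for illustration, not oslo.privsep's wire protocol:

    import threading
    import uuid
    from queue import Queue

    REPLY_OK = 4  # matches the leading "4" in the replies above (assumed to mean success)

    def daemon(requests, replies):
        while True:
            msgid, func, args = requests.get()
            if func is None:      # shutdown sentinel
                break
            replies.put((msgid, (REPLY_OK, func(*args))))

    requests, replies = Queue(), Queue()
    threading.Thread(target=daemon, args=(requests, replies), daemon=True).start()

    msgid = str(uuid.uuid4())
    requests.put((msgid, lambda ns: True, ("ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab",)))
    mid, (status, result) = replies.get()
    print(f"privsep: reply[{mid}]: ({status}, {result})")
    requests.put((None, None, None))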
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.961 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap419dfb65-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.968 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.982 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:48 compute-0 kernel: tap419dfb65-f0: left promiscuous mode
Dec  1 20:03:48 compute-0 nova_compute[189564]: 2025-12-01 20:03:48.991 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:48 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:48.993 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[166fb2ad-a859-4a87-96d1-840d245fcf82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:49 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:49.016 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[e3542271-d79f-4a11-aee2-fefcbbf31d7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:49 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:49.018 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4a3153-e810-4aa8-9cc2-7c38d5ea2513]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:49 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:49.038 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[32041c28-52e4-4d1a-8c51-0ba9b58ff53d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584847, 'reachable_time': 27317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256309, 'error': None, 'target': 'ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
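The reply above is a pyroute2-style netlink link dump: each message carries its attributes as a flat 'attrs' list of [NAME, value] pairs rather than a dict. A small helper for pulling fields out of that shape, run against a copy abridged from the dump above:

    def get_attr(msg, name, default=None):
        # Netlink messages store attributes as [name, value] pairs under 'attrs'.
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return default

    link = {
        'index': 1, 'state': 'up',
        'attrs': [
            ['IFLA_IFNAME', 'lo'],
            ['IFLA_MTU', 65536],
            ['IFLA_OPERSTATE', 'UNKNOWN'],
            ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1,
                              'rx_bytes': 28, 'tx_bytes': 28}],
        ],
    }
    stats = get_attr(link, 'IFLA_STATS64', {})
    print(get_attr(link, 'IFLA_IFNAME'), get_attr(link, 'IFLA_MTU'), stats['rx_bytes'])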
Dec  1 20:03:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d419dfb65\x2df0dd\x2d44b5\x2da131\x2db7c37ebf4bab.mount: Deactivated successfully.
Dec  1 20:03:49 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:49.042 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 20:03:49 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:49.042 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[852719a9-c4a4-4eb4-b32d-cede06cdf6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
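Once the last VIF on the network is gone, the agent's privileged remove_netns() deletes the ovnmeta- namespace, and systemd reports the matching run-netns mount going away. The hand-run equivalent, sketched with ip(8) via subprocess (requires root; this is an illustration, not the agent's code path):

    import subprocess

    ns = "ovnmeta-419dfb65-f0dd-44b5-a131-b7c37ebf4bab"
    # Rough equivalent of the agent's privileged namespace removal.
    subprocess.run(["ip", "netns", "delete", ns], check=True)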
Dec  1 20:03:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:49.744 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1855 Content-Type: application/json Date: Mon, 01 Dec 2025 20:03:48 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-6d95cdcc-baee-4f41-99e0-63c581cf5287 x-openstack-request-id: req-6d95cdcc-baee-4f41-99e0-63c581cf5287 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 20:03:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:49.744 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "6c1de815-4e42-4798-9a73-220b67333524", "name": "tempest-TestNetworkBasicOps-server-1354137625", "status": "ACTIVE", "tenant_id": "162c071887824085bcc9c384a2f8baf0", "user_id": "715e289b64b4407387cbcfe958eb2d0f", "metadata": {}, "hostId": "bc79ce201b24e2307554fcc762c63d1ef4225cea2355166fca3b7e2b", "image": {"id": "d169c234-7ac2-4fdc-b9fa-a08c93484d75", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/d169c234-7ac2-4fdc-b9fa-a08c93484d75"}]}, "flavor": {"id": "69252fc0-77e5-4ac1-807d-77003542464f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/69252fc0-77e5-4ac1-807d-77003542464f"}]}, "created": "2025-12-01T20:02:26Z", "updated": "2025-12-01T20:02:44Z", "addresses": {"tempest-network-smoke--1707279970": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:96:ce:cc"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/6c1de815-4e42-4798-9a73-220b67333524"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/6c1de815-4e42-4798-9a73-220b67333524"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1284131701", "OS-SRV-USG:launched_at": "2025-12-01T20:02:44.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-2044067757"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 20:03:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:49.745 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/6c1de815-4e42-4798-9a73-220b67333524 used request id req-6d95cdcc-baee-4f41-99e0-63c581cf5287 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 20:03:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:49.747 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '6c1de815-4e42-4798-9a73-220b67333524', 'name': 'tempest-TestNetworkBasicOps-server-1354137625', 'flavor': {'id': '69252fc0-77e5-4ac1-807d-77003542464f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '162c071887824085bcc9c384a2f8baf0', 'user_id': '715e289b64b4407387cbcfe958eb2d0f', 'hostId': 'bc79ce201b24e2307554fcc762c63d1ef4225cea2355166fca3b7e2b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
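Comparing the RESP BODY with the "instance data" line that follows it shows ceilometer reducing the full Nova server document to a flat dict, lowercasing status and mapping the 'active' vm_state to 'running'. The same reduction on an abridged copy of the body above (field selection here is inferred from these two lines, not from ceilometer's source):

    import json

    resp_body = json.loads("""{"server": {
        "id": "6c1de815-4e42-4798-9a73-220b67333524",
        "name": "tempest-TestNetworkBasicOps-server-1354137625",
        "status": "ACTIVE",
        "tenant_id": "162c071887824085bcc9c384a2f8baf0",
        "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a",
        "OS-EXT-STS:vm_state": "active",
        "flavor": {"id": "69252fc0-77e5-4ac1-807d-77003542464f"}
    }}""")

    s = resp_body["server"]
    instance = {
        "id": s["id"],
        "name": s["name"],
        "status": s["status"].lower(),           # "ACTIVE" -> "active"
        "tenant_id": s["tenant_id"],
        "OS-EXT-SRV-ATTR:instance_name": s["OS-EXT-SRV-ATTR:instance_name"],
        # vm_state "active" shows up as "running" in the discovery line above:
        "OS-EXT-STS:vm_state": "running" if s["OS-EXT-STS:vm_state"] == "active"
                               else s["OS-EXT-STS:vm_state"],
    }
    print(instance)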
Dec  1 20:03:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:49.750 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 20:03:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:49.751 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/421c1bd5-7edf-41ce-b0a5-872efcaf35b0 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 20:03:49 compute-0 nova_compute[189564]: 2025-12-01 20:03:49.856 189568 DEBUG nova.compute.manager [req-e4a1d177-97d5-411c-a10e-f08c604a5a0d req-1f5e9ab7-3cf9-4a1b-9fa2-e1b8bfe5da0b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-unplugged-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:03:49 compute-0 nova_compute[189564]: 2025-12-01 20:03:49.857 189568 DEBUG oslo_concurrency.lockutils [req-e4a1d177-97d5-411c-a10e-f08c604a5a0d req-1f5e9ab7-3cf9-4a1b-9fa2-e1b8bfe5da0b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:03:49 compute-0 nova_compute[189564]: 2025-12-01 20:03:49.857 189568 DEBUG oslo_concurrency.lockutils [req-e4a1d177-97d5-411c-a10e-f08c604a5a0d req-1f5e9ab7-3cf9-4a1b-9fa2-e1b8bfe5da0b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:03:49 compute-0 nova_compute[189564]: 2025-12-01 20:03:49.858 189568 DEBUG oslo_concurrency.lockutils [req-e4a1d177-97d5-411c-a10e-f08c604a5a0d req-1f5e9ab7-3cf9-4a1b-9fa2-e1b8bfe5da0b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:03:49 compute-0 nova_compute[189564]: 2025-12-01 20:03:49.858 189568 DEBUG nova.compute.manager [req-e4a1d177-97d5-411c-a10e-f08c604a5a0d req-1f5e9ab7-3cf9-4a1b-9fa2-e1b8bfe5da0b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] No waiting events found dispatching network-vif-unplugged-09097114-7a48-4b64-ab17-ed474efbf80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 20:03:49 compute-0 nova_compute[189564]: 2025-12-01 20:03:49.859 189568 DEBUG nova.compute.manager [req-e4a1d177-97d5-411c-a10e-f08c604a5a0d req-1f5e9ab7-3cf9-4a1b-9fa2-e1b8bfe5da0b 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-unplugged-09097114-7a48-4b64-ab17-ed474efbf80e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
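The three lockutils lines above show nova serializing external events per instance: it takes the "<uuid>-events" lock, pops the waiter for network-vif-unplugged (none is registered here), and releases the lock within a millisecond. A rough sketch of that pattern using the same oslo.concurrency primitive, with a hypothetical in-process event map:

    from oslo_concurrency import lockutils

    _events = {}  # hypothetical: instance_uuid -> {event_name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        # Mirror of the logged pattern: guard the per-instance event table
        # with a "<uuid>-events" lock; returning None corresponds to the
        # "No waiting events found dispatching ..." message above.
        with lockutils.lock(f"{instance_uuid}-events"):
            return _events.get(instance_uuid, {}).pop(event_name, None)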
Dec  1 20:03:49 compute-0 nova_compute[189564]: 2025-12-01 20:03:49.933 189568 DEBUG nova.network.neutron [req-513a184c-cbba-4293-a888-dd8c25f89be6 req-820a348d-a809-4283-896b-bd308a06b5f1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Updated VIF entry in instance network info cache for port ab2a4211-760a-400a-bd6c-243749c41a4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  1 20:03:49 compute-0 nova_compute[189564]: 2025-12-01 20:03:49.934 189568 DEBUG nova.network.neutron [req-513a184c-cbba-4293-a888-dd8c25f89be6 req-820a348d-a809-4283-896b-bd308a06b5f1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Updating instance_info_cache with network_info: [{"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:03:49 compute-0 nova_compute[189564]: 2025-12-01 20:03:49.966 189568 DEBUG oslo_concurrency.lockutils [req-513a184c-cbba-4293-a888-dd8c25f89be6 req-820a348d-a809-4283-896b-bd308a06b5f1 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-cb05bc1e-3b85-4998-a503-39bd86bdc17e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
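The network_info payload cached above is plain JSON, so the fixed and floating addresses can be read straight out of it. A small helper, assuming nw_info_json holds the bracketed list from the log line:

    import json

    def addresses_from_nw_info(nw_info_json):
        # Walk vif -> network -> subnets -> ips, collecting fixed addresses
        # and any floating IPs attached to them.
        fixed, floating = [], []
        for vif in json.loads(nw_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    fixed.append(ip["address"])
                    floating.extend(f["address"] for f in ip.get("floating_ips", []))
        return fixed, floating

    # For the cache entry above: (["10.100.0.4"], ["192.168.122.172"])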
Dec  1 20:03:50 compute-0 nova_compute[189564]: 2025-12-01 20:03:50.284 189568 DEBUG nova.network.neutron [-] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:03:50 compute-0 nova_compute[189564]: 2025-12-01 20:03:50.312 189568 INFO nova.compute.manager [-] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Took 1.53 seconds to deallocate network for instance.
Dec  1 20:03:50 compute-0 podman[256311]: 2025-12-01 20:03:50.343944803 +0000 UTC m=+0.115389631 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
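The health_status entry above comes from the podman_exporter container, which publishes Prometheus metrics on host port 9882 behind the mounted TLS web config. A quick scrape can confirm the "healthy" status is backed by live data; the /metrics path is the usual exporter default and the metric prefix is an assumption here, and verify=False stands in for the CA bundle from the mounted certs directory:

    import requests

    # Hedged sketch: scrape the exporter the container config above describes.
    resp = requests.get("https://compute-0:9882/metrics", verify=False, timeout=5)
    resp.raise_for_status()
    # Print a few container-level series (prefix assumed from the exporter name).
    print([l for l in resp.text.splitlines() if l.startswith("podman_")][:5])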
Dec  1 20:03:50 compute-0 nova_compute[189564]: 2025-12-01 20:03:50.373 189568 DEBUG oslo_concurrency.lockutils [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:03:50 compute-0 nova_compute[189564]: 2025-12-01 20:03:50.374 189568 DEBUG oslo_concurrency.lockutils [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.402 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2083 Content-Type: application/json Date: Mon, 01 Dec 2025 20:03:49 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5008af69-7723-41c0-89bf-7d5e76e7a276 x-openstack-request-id: req-5008af69-7723-41c0-89bf-7d5e76e7a276 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.402 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "421c1bd5-7edf-41ce-b0a5-872efcaf35b0", "name": "tempest-TestServerBasicOps-server-48441956", "status": "ACTIVE", "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "user_id": "304fade4774b4bb3838efcc56501f582", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "47bf557a78a2499ffebc2eb75739d0d1ca92235d7ce50d490204d12a", "image": {"id": "d169c234-7ac2-4fdc-b9fa-a08c93484d75", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/d169c234-7ac2-4fdc-b9fa-a08c93484d75"}]}, "flavor": {"id": "69252fc0-77e5-4ac1-807d-77003542464f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/69252fc0-77e5-4ac1-807d-77003542464f"}]}, "created": "2025-12-01T20:02:29Z", "updated": "2025-12-01T20:03:48Z", "addresses": {"tempest-TestServerBasicOps-1994330948-network": [{"version": 4, "addr": "10.100.0.14", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:67:e4:f2"}, {"version": 4, "addr": "192.168.122.217", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:67:e4:f2"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/421c1bd5-7edf-41ce-b0a5-872efcaf35b0"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/421c1bd5-7edf-41ce-b0a5-872efcaf35b0"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-232633533", "OS-SRV-USG:launched_at": "2025-12-01T20:02:38.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1439414318"}, {"name": "tempest-securitygroup--1747176056"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.402 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/421c1bd5-7edf-41ce-b0a5-872efcaf35b0 used request id req-5008af69-7723-41c0-89bf-7d5e76e7a276 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.404 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '421c1bd5-7edf-41ce-b0a5-872efcaf35b0', 'name': 'tempest-TestServerBasicOps-server-48441956', 'flavor': {'id': '69252fc0-77e5-4ac1-807d-77003542464f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'bde8983778e8471a8b7f6da9e9d53732', 'user_id': '304fade4774b4bb3838efcc56501f582', 'hostId': '47bf557a78a2499ffebc2eb75739d0d1ca92235d7ce50d490204d12a', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.407 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance cb05bc1e-3b85-4998-a503-39bd86bdc17e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.409 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/cb05bc1e-3b85-4998-a503-39bd86bdc17e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 20:03:50 compute-0 nova_compute[189564]: 2025-12-01 20:03:50.497 189568 DEBUG nova.compute.provider_tree [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:03:50 compute-0 nova_compute[189564]: 2025-12-01 20:03:50.523 189568 DEBUG nova.scheduler.client.report [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
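The inventory comparison above operates on usable capacity, which placement derives per resource class as (total - reserved) * allocation_ratio. Plugging in the logged values shows what this provider can actually schedule:

    # Worked example with the inventory data from the log line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2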
Dec  1 20:03:50 compute-0 nova_compute[189564]: 2025-12-01 20:03:50.554 189568 DEBUG oslo_concurrency.lockutils [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:03:50 compute-0 nova_compute[189564]: 2025-12-01 20:03:50.594 189568 INFO nova.scheduler.client.report [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Deleted allocations for instance 4a104baa-5fd5-47aa-973b-11d99c76c3e2
Dec  1 20:03:50 compute-0 nova_compute[189564]: 2025-12-01 20:03:50.705 189568 DEBUG oslo_concurrency.lockutils [None req-c6a515d7-2e06-4054-83e4-9d04a8e4005e 89c8a8cb31224140bf2b9c0b94acfe04 5102d72cb1ce4e6da810b2584a2abd73 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
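Lock lines like the one above carry their hold time in a fixed ':: held N.NNNs' suffix, so long critical sections (this terminate held its instance lock for 2.318 s) are easy to mine out of the log. A sketch; the 1.0 s threshold is an arbitrary choice:

    import re

    # Matches the oslo_concurrency release lines seen throughout this log.
    HELD = re.compile(
        r'Lock "(?P<name>[^"]+)" "released" by "(?P<owner>[^"]+)" :: held (?P<secs>[\d.]+)s')

    def slow_locks(lines, threshold=1.0):
        for line in lines:
            m = HELD.search(line)
            if m and float(m.group("secs")) > threshold:
                yield m.group("name"), m.group("owner"), float(m.group("secs"))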
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.960 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1973 Content-Type: application/json Date: Mon, 01 Dec 2025 20:03:50 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-bb2dc877-4a9b-4634-9953-3ebe2e9f29c5 x-openstack-request-id: req-bb2dc877-4a9b-4634-9953-3ebe2e9f29c5 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.960 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "cb05bc1e-3b85-4998-a503-39bd86bdc17e", "name": "tempest-TestNetworkBasicOps-server-400616177", "status": "ACTIVE", "tenant_id": "162c071887824085bcc9c384a2f8baf0", "user_id": "715e289b64b4407387cbcfe958eb2d0f", "metadata": {}, "hostId": "bc79ce201b24e2307554fcc762c63d1ef4225cea2355166fca3b7e2b", "image": {"id": "d169c234-7ac2-4fdc-b9fa-a08c93484d75", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/d169c234-7ac2-4fdc-b9fa-a08c93484d75"}]}, "flavor": {"id": "69252fc0-77e5-4ac1-807d-77003542464f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/69252fc0-77e5-4ac1-807d-77003542464f"}]}, "created": "2025-12-01T20:03:36Z", "updated": "2025-12-01T20:03:43Z", "addresses": {"tempest-network-smoke--1707279970": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d2:c4:d1"}, {"version": 4, "addr": "192.168.122.172", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d2:c4:d1"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/cb05bc1e-3b85-4998-a503-39bd86bdc17e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/cb05bc1e-3b85-4998-a503-39bd86bdc17e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-138657879", "OS-SRV-USG:launched_at": "2025-12-01T20:03:43.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-331568704"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.960 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/cb05bc1e-3b85-4998-a503-39bd86bdc17e used request id req-bb2dc877-4a9b-4634-9953-3ebe2e9f29c5 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.961 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cb05bc1e-3b85-4998-a503-39bd86bdc17e', 'name': 'tempest-TestNetworkBasicOps-server-400616177', 'flavor': {'id': '69252fc0-77e5-4ac1-807d-77003542464f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'd169c234-7ac2-4fdc-b9fa-a08c93484d75'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '162c071887824085bcc9c384a2f8baf0', 'user_id': '715e289b64b4407387cbcfe958eb2d0f', 'hostId': 'bc79ce201b24e2307554fcc762c63d1ef4225cea2355166fca3b7e2b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.961 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.962 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.962 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.962 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.963 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T20:03:50.962171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.966 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 6c1de815-4e42-4798-9a73-220b67333524 / tap05dcfe74-fe inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.967 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.970 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 / tap36c65cc8-9f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.970 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.974 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for cb05bc1e-3b85-4998-a503-39bd86bdc17e / tapab2a4211-76 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.974 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.974 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
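The zero volumes above are expected: a .delta meter is the difference between two consecutive cumulative readings per (instance, tap) device, and the inspector just noted it has no predecessor sample for any of the three taps, so the first poll reports 0. A sketch of that bookkeeping with a hypothetical in-memory cache:

    _prev = {}  # hypothetical cache: (instance_id, device) -> last cumulative value

    def delta_sample(instance_id, device, cumulative):
        # First observation has no predecessor, so report 0, as in the log;
        # afterwards report the (non-negative) difference.
        key = (instance_id, device)
        prev = _prev.get(key)
        _prev[key] = cumulative
        return 0 if prev is None else max(cumulative - prev, 0)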
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.975 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.975 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.975 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.975 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.975 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.975 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.outgoing.packets volume: 110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.976 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.outgoing.packets volume: 166 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.976 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.976 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T20:03:50.975385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.976 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.976 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.976 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.976 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.976 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.977 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.977 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.977 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.977 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T20:03:50.976964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.977 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.978 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.978 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.978 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.978 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.978 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.978 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.978 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.979 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.979 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T20:03:50.978427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.979 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.979 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.979 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.979 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.979 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.979 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.980 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.980 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.980 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T20:03:50.979980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.980 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.981 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.981 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.981 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.981 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.981 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.981 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:50.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T20:03:50.981424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.005 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.005 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.026 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.027 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.045 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.046 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.046 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
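The two capacity samples per instance line up with the flavor and config drive seen earlier: 1073741824 bytes is exactly the 1 GB (1 GiB) root disk of m1.nano, and the ~474 KiB second device is consistent with the config drive (config_drive was "True" in the RESP bodies above):

    # Sanity arithmetic on the logged capacities.
    assert 1 * 1024**3 == 1073741824   # flavor disk: 1 GiB root device
    print(485376 / 1024)               # 474.0 KiB, the small second device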
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.047 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.047 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.047 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.047 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.047 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T20:03:51.047465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.109 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.read.bytes volume: 30366208 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.110 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.187 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.read.bytes volume: 31644160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.187 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.read.bytes volume: 324434 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.250 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.251 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.252 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.252 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.253 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.253 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.253 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.253 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.254 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.incoming.bytes volume: 20286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.255 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.incoming.bytes volume: 25623 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.255 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.256 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.256 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.256 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.257 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.257 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.257 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.258 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.258 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1354137625>, <NovaLikeServer: tempest-TestServerBasicOps-server-48441956>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-400616177>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1354137625>, <NovaLikeServer: tempest-TestServerBasicOps-server-48441956>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-400616177>]
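The PollsterPermanentError above is the libvirt inspector declining rate meters outright: it exposes cumulative counters only, so network.incoming.bytes.rate is permanently dropped for this source rather than retried each cycle. A rate can still be derived downstream from two cumulative samples; the first reading below comes from the network.incoming.bytes samples above, while the second reading and the 10 s interval are hypothetical:

    def bytes_rate(prev_bytes, prev_ts, cur_bytes, cur_ts):
        # Simple first difference over elapsed time, in bytes per second.
        dt = cur_ts - prev_ts
        return (cur_bytes - prev_bytes) / dt if dt > 0 else 0.0

    print(bytes_rate(20286, 0.0, 24382, 10.0))  # 409.6 B/s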
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.259 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.259 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.260 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.260 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.260 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.261 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.read.latency volume: 595537439 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.261 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.read.latency volume: 57533864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.262 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.read.latency volume: 683504368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.262 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.read.latency volume: 210010626 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.263 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.read.latency volume: 597204351 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T20:03:51.253530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T20:03:51.257329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.263 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.read.latency volume: 1551928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.263 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T20:03:51.260362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.264 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.264 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.264 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.264 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.265 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.265 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.266 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.read.requests volume: 1093 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.266 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.267 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.read.requests volume: 1158 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.267 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.read.requests volume: 160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.267 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T20:03:51.265160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.268 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.269 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.269 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.269 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.269 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.270 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T20:03:51.270283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.270 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.271 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.272 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.272 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.272 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.273 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.273 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.274 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.274 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.274 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.274 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.274 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.274 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T20:03:51.274706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.275 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.write.bytes volume: 72990720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.275 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.275 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.write.bytes volume: 73007104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.276 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.276 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.276 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.277 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.277 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.277 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.277 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.277 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.277 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T20:03:51.277825) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.306 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.345 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.384 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.384 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.384 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.384 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.385 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.385 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.385 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.385 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.write.latency volume: 3318254811 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.385 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.385 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.write.latency volume: 4696133443 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.386 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.386 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.386 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.387 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.387 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.387 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.387 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T20:03:51.385149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.387 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.387 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.388 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.write.requests volume: 327 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.388 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T20:03:51.387942) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.388 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.write.requests volume: 317 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.388 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.389 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.389 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.389 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.389 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.389 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.389 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.389 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.389 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.390 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.390 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.390 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T20:03:51.389924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.390 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.391 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.391 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.391 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.391 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.391 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.391 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.392 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.392 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.392 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.392 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.393 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.393 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.393 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.393 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.393 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.incoming.packets volume: 117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.393 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.incoming.packets volume: 145 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.393 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.394 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.394 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.394 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.394 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T20:03:51.392105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.394 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T20:03:51.393324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.395 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.395 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.395 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.395 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.395 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.396 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.396 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.396 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T20:03:51.395005) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.396 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T20:03:51.396255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.397 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.397 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.397 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.397 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.397 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.398 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.398 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.398 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T20:03:51.398111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.398 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.398 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.398 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.399 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.399 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.399 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.399 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.399 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.399 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.399 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/network.outgoing.bytes volume: 16100 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.400 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/network.outgoing.bytes volume: 28850 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.400 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.400 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.400 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.401 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.401 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T20:03:51.399740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.401 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.401 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.401 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T20:03:51.401379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.401 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1354137625>, <NovaLikeServer: tempest-TestServerBasicOps-server-48441956>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-400616177>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-1354137625>, <NovaLikeServer: tempest-TestServerBasicOps-server-48441956>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-400616177>]
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.402 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.402 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.402 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.402 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.402 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.402 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/cpu volume: 33860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.402 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/cpu volume: 35300000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.402 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/cpu volume: 8110000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.403 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.403 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.403 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.403 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.403 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T20:03:51.402262) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.404 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.404 15 DEBUG ceilometer.compute.pollsters [-] 6c1de815-4e42-4798-9a73-220b67333524/memory.usage volume: 42.5703125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T20:03:51.404099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.404 15 DEBUG ceilometer.compute.pollsters [-] 421c1bd5-7edf-41ce-b0a5-872efcaf35b0/memory.usage volume: 42.56640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.404 15 DEBUG ceilometer.compute.pollsters [-] cb05bc1e-3b85-4998-a503-39bd86bdc17e/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.404 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance cb05bc1e-3b85-4998-a503-39bd86bdc17e: ceilometer.compute.pollsters.NoVolumeException
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.405 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.405 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.405 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.405 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.405 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.405 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.405 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.406 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.407 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.407 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.407 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.407 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:03:51.407 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:03:51 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:51.901 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.056 189568 DEBUG nova.compute.manager [req-7e8d9cf5-0097-463f-812b-c17bad902856 req-07b9c8b8-88d8-4276-837b-c61babce2ca2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.056 189568 DEBUG oslo_concurrency.lockutils [req-7e8d9cf5-0097-463f-812b-c17bad902856 req-07b9c8b8-88d8-4276-837b-c61babce2ca2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.057 189568 DEBUG oslo_concurrency.lockutils [req-7e8d9cf5-0097-463f-812b-c17bad902856 req-07b9c8b8-88d8-4276-837b-c61babce2ca2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.057 189568 DEBUG oslo_concurrency.lockutils [req-7e8d9cf5-0097-463f-812b-c17bad902856 req-07b9c8b8-88d8-4276-837b-c61babce2ca2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "4a104baa-5fd5-47aa-973b-11d99c76c3e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.057 189568 DEBUG nova.compute.manager [req-7e8d9cf5-0097-463f-812b-c17bad902856 req-07b9c8b8-88d8-4276-837b-c61babce2ca2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] No waiting events found dispatching network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.058 189568 WARNING nova.compute.manager [req-7e8d9cf5-0097-463f-812b-c17bad902856 req-07b9c8b8-88d8-4276-837b-c61babce2ca2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received unexpected event network-vif-plugged-09097114-7a48-4b64-ab17-ed474efbf80e for instance with vm_state deleted and task_state None.#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.058 189568 DEBUG nova.compute.manager [req-7e8d9cf5-0097-463f-812b-c17bad902856 req-07b9c8b8-88d8-4276-837b-c61babce2ca2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Received event network-vif-deleted-09097114-7a48-4b64-ab17-ed474efbf80e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.329 189568 DEBUG oslo_concurrency.lockutils [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.330 189568 DEBUG oslo_concurrency.lockutils [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.331 189568 DEBUG oslo_concurrency.lockutils [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.332 189568 DEBUG oslo_concurrency.lockutils [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.332 189568 DEBUG oslo_concurrency.lockutils [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.334 189568 INFO nova.compute.manager [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Terminating instance#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.336 189568 DEBUG nova.compute.manager [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 20:03:52 compute-0 kernel: tap36c65cc8-9f (unregistering): left promiscuous mode
Dec  1 20:03:52 compute-0 NetworkManager[56474]: <info>  [1764619432.3955] device (tap36c65cc8-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:03:52 compute-0 ovn_controller[97948]: 2025-12-01T20:03:52Z|00155|binding|INFO|Releasing lport 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 from this chassis (sb_readonly=0)
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.406 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:52 compute-0 ovn_controller[97948]: 2025-12-01T20:03:52Z|00156|binding|INFO|Setting lport 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 down in Southbound
Dec  1 20:03:52 compute-0 ovn_controller[97948]: 2025-12-01T20:03:52Z|00157|binding|INFO|Removing iface tap36c65cc8-9f ovn-installed in OVS
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.413 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.421 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:e4:f2 10.100.0.14'], port_security=['fa:16:3e:67:e4:f2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '421c1bd5-7edf-41ce-b0a5-872efcaf35b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61c137f0-effb-4f90-8a6c-ea3831f8e4db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bde8983778e8471a8b7f6da9e9d53732', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfd44490-c6a6-4dbb-b2ea-afe6ce03a378 e9f2ae9c-ee72-46a2-b911-c2f7a0a61f4f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0cf4599-31fb-4d2b-a772-41955e5d1a1c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=36c65cc8-9f73-47e0-8a82-7ca2a02890e5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.424 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 36c65cc8-9f73-47e0-8a82-7ca2a02890e5 in datapath 61c137f0-effb-4f90-8a6c-ea3831f8e4db unbound from our chassis#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.428 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61c137f0-effb-4f90-8a6c-ea3831f8e4db, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.431 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[e49e277a-a59e-4e9d-8298-ac30d22495b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.432 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db namespace which is not needed anymore#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.437 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:52 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  1 20:03:52 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000b.scope: Consumed 42.922s CPU time.
Dec  1 20:03:52 compute-0 systemd-machined[155891]: Machine qemu-10-instance-0000000b terminated.
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.622 189568 INFO nova.virt.libvirt.driver [-] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Instance destroyed successfully.#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.623 189568 DEBUG nova.objects.instance [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lazy-loading 'resources' on Instance uuid 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.637 189568 DEBUG nova.virt.libvirt.vif [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:02:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-48441956',display_name='tempest-TestServerBasicOps-server-48441956',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-48441956',id=11,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAjaBVksdVBINl9zeD8esJMb4Vfc08yy8kW7yEo+Tn5f93Vx5EP21WRviUp4cdA9l5B1MnoKZGq0fFz416IF/plwNciZi0lqZU9c6SZEc6R79Ku1E8FXtQULIca0cSlUsA==',key_name='tempest-TestServerBasicOps-232633533',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:02:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bde8983778e8471a8b7f6da9e9d53732',ramdisk_id='',reservation_id='r-g6r0wj4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-212789688',owner_user_name='tempest-TestServerBasicOps-212789688-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:03:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='304fade4774b4bb3838efcc56501f582',uuid=421c1bd5-7edf-41ce-b0a5-872efcaf35b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.638 189568 DEBUG nova.network.os_vif_util [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Converting VIF {"id": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "address": "fa:16:3e:67:e4:f2", "network": {"id": "61c137f0-effb-4f90-8a6c-ea3831f8e4db", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1994330948-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bde8983778e8471a8b7f6da9e9d53732", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36c65cc8-9f", "ovs_interfaceid": "36c65cc8-9f73-47e0-8a82-7ca2a02890e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.639 189568 DEBUG nova.network.os_vif_util [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:e4:f2,bridge_name='br-int',has_traffic_filtering=True,id=36c65cc8-9f73-47e0-8a82-7ca2a02890e5,network=Network(61c137f0-effb-4f90-8a6c-ea3831f8e4db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36c65cc8-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.639 189568 DEBUG os_vif [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:e4:f2,bridge_name='br-int',has_traffic_filtering=True,id=36c65cc8-9f73-47e0-8a82-7ca2a02890e5,network=Network(61c137f0-effb-4f90-8a6c-ea3831f8e4db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36c65cc8-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.641 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.641 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36c65cc8-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.643 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.646 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.648 189568 INFO os_vif [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:e4:f2,bridge_name='br-int',has_traffic_filtering=True,id=36c65cc8-9f73-47e0-8a82-7ca2a02890e5,network=Network(61c137f0-effb-4f90-8a6c-ea3831f8e4db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36c65cc8-9f')#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.649 189568 INFO nova.virt.libvirt.driver [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Deleting instance files /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0_del#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.650 189568 INFO nova.virt.libvirt.driver [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Deletion of /var/lib/nova/instances/421c1bd5-7edf-41ce-b0a5-872efcaf35b0_del complete#033[00m
Dec  1 20:03:52 compute-0 neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db[255284]: [NOTICE]   (255288) : haproxy version is 2.8.14-c23fe91
Dec  1 20:03:52 compute-0 neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db[255284]: [NOTICE]   (255288) : path to executable is /usr/sbin/haproxy
Dec  1 20:03:52 compute-0 neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db[255284]: [WARNING]  (255288) : Exiting Master process...
Dec  1 20:03:52 compute-0 neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db[255284]: [ALERT]    (255288) : Current worker (255290) exited with code 143 (Terminated)
Dec  1 20:03:52 compute-0 neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db[255284]: [WARNING]  (255288) : All workers exited. Exiting... (0)
Dec  1 20:03:52 compute-0 systemd[1]: libpod-59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010.scope: Deactivated successfully.
Dec  1 20:03:52 compute-0 podman[256360]: 2025-12-01 20:03:52.666041456 +0000 UTC m=+0.087002549 container died 59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  1 20:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010-userdata-shm.mount: Deactivated successfully.
Dec  1 20:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9722a2c7ce3c81ce6b286e207364c55fc594916c29299d314fc0bb6f3313a714-merged.mount: Deactivated successfully.
Dec  1 20:03:52 compute-0 podman[256360]: 2025-12-01 20:03:52.725882108 +0000 UTC m=+0.146843201 container cleanup 59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 20:03:52 compute-0 systemd[1]: libpod-conmon-59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010.scope: Deactivated successfully.
Dec  1 20:03:52 compute-0 podman[256403]: 2025-12-01 20:03:52.846521422 +0000 UTC m=+0.085459751 container remove 59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.861 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[50c3ea32-8145-4879-918e-fce92d45955f]: (4, ('Mon Dec  1 08:03:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db (59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010)\n59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010\nMon Dec  1 08:03:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db (59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010)\n59e9b70137d81be1d8c697c11c6297dcc613a0b5cc7c25b2724f466cd2778010\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.863 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b8659d-d2bf-4913-a1ea-0d9c0b4e66ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.865 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61c137f0-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:03:52 compute-0 kernel: tap61c137f0-e0: left promiscuous mode
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.870 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.888 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:52 compute-0 nova_compute[189564]: 2025-12-01 20:03:52.889 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.895 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5a8e08-ed2d-4e55-997f-3821a1f920c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.914 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[6d05eb16-9858-4f6e-8705-41681b4a9d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.919 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[352d258e-42df-46c8-8d0a-fa112414a673]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.938 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[eafa5135-65a0-42f2-933f-8303d59cfd34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584367, 'reachable_time': 27621, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256418, 'error': None, 'target': 'ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.941 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61c137f0-effb-4f90-8a6c-ea3831f8e4db deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 20:03:52 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:03:52.942 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[59e5aa83-85a7-43b5-94d8-11e0ee65ace7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:03:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d61c137f0\x2deffb\x2d4f90\x2d8a6c\x2dea3831f8e4db.mount: Deactivated successfully.
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.025 189568 DEBUG nova.compute.manager [req-9a169620-20b9-4529-a06e-3e502d259f5b req-35fb815b-5ece-4e96-8eed-90b6d3168ff8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received event network-vif-unplugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.025 189568 DEBUG oslo_concurrency.lockutils [req-9a169620-20b9-4529-a06e-3e502d259f5b req-35fb815b-5ece-4e96-8eed-90b6d3168ff8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.028 189568 DEBUG oslo_concurrency.lockutils [req-9a169620-20b9-4529-a06e-3e502d259f5b req-35fb815b-5ece-4e96-8eed-90b6d3168ff8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.028 189568 DEBUG oslo_concurrency.lockutils [req-9a169620-20b9-4529-a06e-3e502d259f5b req-35fb815b-5ece-4e96-8eed-90b6d3168ff8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.029 189568 DEBUG nova.compute.manager [req-9a169620-20b9-4529-a06e-3e502d259f5b req-35fb815b-5ece-4e96-8eed-90b6d3168ff8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] No waiting events found dispatching network-vif-unplugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.030 189568 DEBUG nova.compute.manager [req-9a169620-20b9-4529-a06e-3e502d259f5b req-35fb815b-5ece-4e96-8eed-90b6d3168ff8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received event network-vif-unplugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.092 189568 INFO nova.compute.manager [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.093 189568 DEBUG oslo.service.loopingcall [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.094 189568 DEBUG nova.compute.manager [-] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.095 189568 DEBUG nova.network.neutron [-] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:03:53 compute-0 nova_compute[189564]: 2025-12-01 20:03:53.908 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:54 compute-0 podman[256420]: 2025-12-01 20:03:54.363997357 +0000 UTC m=+0.115141683 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 20:03:54 compute-0 podman[256421]: 2025-12-01 20:03:54.370738447 +0000 UTC m=+0.115116353 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 20:03:54 compute-0 podman[256422]: 2025-12-01 20:03:54.375276938 +0000 UTC m=+0.108046912 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 20:03:54 compute-0 podman[256419]: 2025-12-01 20:03:54.377748426 +0000 UTC m=+0.126977572 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm)
Dec  1 20:03:54 compute-0 podman[256426]: 2025-12-01 20:03:54.409451362 +0000 UTC m=+0.132100002 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.139 189568 DEBUG nova.compute.manager [req-3fbcaf78-8901-48de-83b4-ca7fdf0a91ee req-5e9a30d0-9ed6-4cd8-b0a3-b2a512e5a277 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received event network-vif-plugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.140 189568 DEBUG oslo_concurrency.lockutils [req-3fbcaf78-8901-48de-83b4-ca7fdf0a91ee req-5e9a30d0-9ed6-4cd8-b0a3-b2a512e5a277 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.140 189568 DEBUG oslo_concurrency.lockutils [req-3fbcaf78-8901-48de-83b4-ca7fdf0a91ee req-5e9a30d0-9ed6-4cd8-b0a3-b2a512e5a277 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.140 189568 DEBUG oslo_concurrency.lockutils [req-3fbcaf78-8901-48de-83b4-ca7fdf0a91ee req-5e9a30d0-9ed6-4cd8-b0a3-b2a512e5a277 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.141 189568 DEBUG nova.compute.manager [req-3fbcaf78-8901-48de-83b4-ca7fdf0a91ee req-5e9a30d0-9ed6-4cd8-b0a3-b2a512e5a277 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] No waiting events found dispatching network-vif-plugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.141 189568 WARNING nova.compute.manager [req-3fbcaf78-8901-48de-83b4-ca7fdf0a91ee req-5e9a30d0-9ed6-4cd8-b0a3-b2a512e5a277 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received unexpected event network-vif-plugged-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 for instance with vm_state active and task_state deleting.#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.756 189568 DEBUG nova.network.neutron [-] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.782 189568 INFO nova.compute.manager [-] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Took 2.69 seconds to deallocate network for instance.#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.847 189568 DEBUG oslo_concurrency.lockutils [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.847 189568 DEBUG oslo_concurrency.lockutils [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.957 189568 DEBUG nova.compute.provider_tree [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.970 189568 DEBUG nova.scheduler.client.report [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:03:55 compute-0 nova_compute[189564]: 2025-12-01 20:03:55.988 189568 DEBUG oslo_concurrency.lockutils [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:56 compute-0 nova_compute[189564]: 2025-12-01 20:03:56.009 189568 INFO nova.scheduler.client.report [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Deleted allocations for instance 421c1bd5-7edf-41ce-b0a5-872efcaf35b0#033[00m
Dec  1 20:03:56 compute-0 nova_compute[189564]: 2025-12-01 20:03:56.071 189568 DEBUG oslo_concurrency.lockutils [None req-1ef7797b-5398-4537-8685-409a1c164a30 304fade4774b4bb3838efcc56501f582 bde8983778e8471a8b7f6da9e9d53732 - - default default] Lock "421c1bd5-7edf-41ce-b0a5-872efcaf35b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:03:56 compute-0 ovn_controller[97948]: 2025-12-01T20:03:56Z|00158|binding|INFO|Releasing lport b1e4fac5-26a3-4807-b860-bcfa4669fff5 from this chassis (sb_readonly=0)
Dec  1 20:03:56 compute-0 nova_compute[189564]: 2025-12-01 20:03:56.649 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:57 compute-0 nova_compute[189564]: 2025-12-01 20:03:57.378 189568 DEBUG nova.compute.manager [req-8dd75787-2593-469e-a858-f2376d8d1267 req-4067b773-6ee9-49bd-9eb7-614730ded230 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Received event network-vif-deleted-36c65cc8-9f73-47e0-8a82-7ca2a02890e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:03:57 compute-0 nova_compute[189564]: 2025-12-01 20:03:57.644 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:58 compute-0 nova_compute[189564]: 2025-12-01 20:03:58.908 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:03:59 compute-0 podman[203750]: time="2025-12-01T20:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:03:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:03:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
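The two GET requests above are podman's libpod REST API answering a client over its local Unix socket. A self-contained sketch of the same query, assuming the conventional socket path /run/podman/podman.sock (the same path appears later in this log as CONTAINER_HOST for podman_exporter):

    # Query the libpod API over podman's Unix socket, like the requests above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            # Swap the TCP socket for the podman service's Unix socket.
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(len(json.loads(conn.getresponse().read())), 'containers')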
Dec  1 20:04:01 compute-0 openstack_network_exporter[205914]: ERROR   20:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:04:01 compute-0 openstack_network_exporter[205914]: ERROR   20:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:04:01 compute-0 openstack_network_exporter[205914]: ERROR   20:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:04:01 compute-0 openstack_network_exporter[205914]: ERROR   20:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:04:01 compute-0 openstack_network_exporter[205914]: ERROR   20:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
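The exporter errors above all reduce to the same cause: it cannot find the control sockets it uses for appctl-style calls to ovs-vswitchd and ovn-northd. A quick hedged check for the conventional socket locations (these globs are the usual defaults, not necessarily this deployment's paths):

    # Look for the control sockets the exporter needs; adjust per packaging.
    import glob
    for pattern in ('/var/run/openvswitch/ovs-vswitchd.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'not found')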
Dec  1 20:04:02 compute-0 ovn_controller[97948]: 2025-12-01T20:04:02Z|00159|binding|INFO|Releasing lport b1e4fac5-26a3-4807-b860-bcfa4669fff5 from this chassis (sb_readonly=0)
Dec  1 20:04:02 compute-0 nova_compute[189564]: 2025-12-01 20:04:02.491 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:02 compute-0 nova_compute[189564]: 2025-12-01 20:04:02.647 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:03 compute-0 nova_compute[189564]: 2025-12-01 20:04:03.675 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619428.6738818, 4a104baa-5fd5-47aa-973b-11d99c76c3e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:04:03 compute-0 nova_compute[189564]: 2025-12-01 20:04:03.676 189568 INFO nova.compute.manager [-] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] VM Stopped (Lifecycle Event)#033[00m
Dec  1 20:04:03 compute-0 nova_compute[189564]: 2025-12-01 20:04:03.703 189568 DEBUG nova.compute.manager [None req-0c62b3b7-419e-488d-a354-960d5c1e33b3 - - - - - -] [instance: 4a104baa-5fd5-47aa-973b-11d99c76c3e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:04:03 compute-0 nova_compute[189564]: 2025-12-01 20:04:03.911 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:06 compute-0 podman[256520]: 2025-12-01 20:04:06.362331197 +0000 UTC m=+0.125967100 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 20:04:07 compute-0 ovn_controller[97948]: 2025-12-01T20:04:07Z|00160|binding|INFO|Releasing lport b1e4fac5-26a3-4807-b860-bcfa4669fff5 from this chassis (sb_readonly=0)
Dec  1 20:04:07 compute-0 nova_compute[189564]: 2025-12-01 20:04:07.182 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:07 compute-0 nova_compute[189564]: 2025-12-01 20:04:07.612 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619432.611185, 421c1bd5-7edf-41ce-b0a5-872efcaf35b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:04:07 compute-0 nova_compute[189564]: 2025-12-01 20:04:07.613 189568 INFO nova.compute.manager [-] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] VM Stopped (Lifecycle Event)#033[00m
Dec  1 20:04:07 compute-0 nova_compute[189564]: 2025-12-01 20:04:07.642 189568 DEBUG nova.compute.manager [None req-6b611a80-4475-4ce6-ae48-d11ed65a182e - - - - - -] [instance: 421c1bd5-7edf-41ce-b0a5-872efcaf35b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:04:07 compute-0 nova_compute[189564]: 2025-12-01 20:04:07.650 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:08 compute-0 nova_compute[189564]: 2025-12-01 20:04:08.914 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:09 compute-0 podman[256538]: 2025-12-01 20:04:09.3815514 +0000 UTC m=+0.151591158 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 20:04:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:12.221 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:04:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:12.221 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:04:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:12.222 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:04:12 compute-0 nova_compute[189564]: 2025-12-01 20:04:12.656 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:13 compute-0 nova_compute[189564]: 2025-12-01 20:04:13.918 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:15 compute-0 nova_compute[189564]: 2025-12-01 20:04:15.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:04:15 compute-0 nova_compute[189564]: 2025-12-01 20:04:15.252 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
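The skip above is expected behaviour: _reclaim_queued_deletes only does work when reclaim_instance_interval is positive, which is also what enables soft delete. For reference, the relevant nova.conf knob (value in seconds; 0 is the default and matches this log):

    [DEFAULT]
    # > 0 enables soft delete; soft-deleted instances are reclaimed after
    # this many seconds. 0 disables it, as logged above.
    reclaim_instance_interval = 0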
Dec  1 20:04:17 compute-0 nova_compute[189564]: 2025-12-01 20:04:17.252 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:04:17 compute-0 podman[256564]: 2025-12-01 20:04:17.337113075 +0000 UTC m=+0.088937683 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 20:04:17 compute-0 nova_compute[189564]: 2025-12-01 20:04:17.661 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:18 compute-0 nova_compute[189564]: 2025-12-01 20:04:18.920 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:19 compute-0 nova_compute[189564]: 2025-12-01 20:04:19.611 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:19 compute-0 ovn_controller[97948]: 2025-12-01T20:04:19Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:c4:d1 10.100.0.4
Dec  1 20:04:19 compute-0 ovn_controller[97948]: 2025-12-01T20:04:19Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:c4:d1 10.100.0.4
Dec  1 20:04:21 compute-0 nova_compute[189564]: 2025-12-01 20:04:21.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:04:21 compute-0 podman[256612]: 2025-12-01 20:04:21.362037428 +0000 UTC m=+0.123622632 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 20:04:22 compute-0 nova_compute[189564]: 2025-12-01 20:04:22.666 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:23 compute-0 nova_compute[189564]: 2025-12-01 20:04:23.923 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.277 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.278 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.278 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.278 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.546 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.628 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.629 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.706 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.716 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.819 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.820 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.901 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
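Each disk audit above shells out to qemu-img info under oslo's prlimit wrapper: a 1 GiB address-space cap, a 30 s CPU cap, and --force-share so the probe does not disturb a running QEMU's image lock. A minimal sketch of the same call through oslo.concurrency (the instance path is a placeholder):

    # Re-create the logged qemu-img probe with the same resource limits.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', '/var/lib/nova/instances/<uuid>/disk',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)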
Dec  1 20:04:24 compute-0 nova_compute[189564]: 2025-12-01 20:04:24.987 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:25 compute-0 podman[256652]: 2025-12-01 20:04:25.32621732 +0000 UTC m=+0.087164287 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  1 20:04:25 compute-0 podman[256651]: 2025-12-01 20:04:25.332601394 +0000 UTC m=+0.096380961 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.350 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.351 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5011MB free_disk=72.28232955932617GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.352 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.352 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:04:25 compute-0 podman[256653]: 2025-12-01 20:04:25.354976249 +0000 UTC m=+0.111436583 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  1 20:04:25 compute-0 podman[256654]: 2025-12-01 20:04:25.355178515 +0000 UTC m=+0.112334700 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  1 20:04:25 compute-0 podman[256655]: 2025-12-01 20:04:25.372403495 +0000 UTC m=+0.123511028 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.426 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 6c1de815-4e42-4798-9a73-220b67333524 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.426 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance cb05bc1e-3b85-4998-a503-39bd86bdc17e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.426 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.427 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.480 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.494 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.524 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 20:04:25 compute-0 nova_compute[189564]: 2025-12-01 20:04:25.525 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
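For scale: placement turns the inventory reported above into schedulable capacity as (total - reserved) * allocation_ratio per resource class, so this host can be scheduled up to 32 VCPU, 7168 MB of RAM and 70.2 GB of disk. A worked check:

    # Capacity placement derives from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(cap, 1))
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2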
Dec  1 20:04:26 compute-0 nova_compute[189564]: 2025-12-01 20:04:26.527 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:04:26 compute-0 nova_compute[189564]: 2025-12-01 20:04:26.528 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 20:04:26 compute-0 nova_compute[189564]: 2025-12-01 20:04:26.532 189568 INFO nova.compute.manager [None req-80037e03-9eb9-4149-8c38-1684153363ef 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Get console output#033[00m
Dec  1 20:04:26 compute-0 nova_compute[189564]: 2025-12-01 20:04:26.544 239719 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  1 20:04:26 compute-0 nova_compute[189564]: 2025-12-01 20:04:26.785 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:04:26 compute-0 nova_compute[189564]: 2025-12-01 20:04:26.786 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:04:26 compute-0 nova_compute[189564]: 2025-12-01 20:04:26.787 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.181 189568 DEBUG oslo_concurrency.lockutils [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.182 189568 DEBUG oslo_concurrency.lockutils [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.183 189568 DEBUG oslo_concurrency.lockutils [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.183 189568 DEBUG oslo_concurrency.lockutils [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.184 189568 DEBUG oslo_concurrency.lockutils [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.186 189568 INFO nova.compute.manager [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Terminating instance#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.188 189568 DEBUG nova.compute.manager [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 20:04:27 compute-0 kernel: tapab2a4211-76 (unregistering): left promiscuous mode
Dec  1 20:04:27 compute-0 NetworkManager[56474]: <info>  [1764619467.2256] device (tapab2a4211-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:04:27 compute-0 ovn_controller[97948]: 2025-12-01T20:04:27Z|00161|binding|INFO|Releasing lport ab2a4211-760a-400a-bd6c-243749c41a4e from this chassis (sb_readonly=0)
Dec  1 20:04:27 compute-0 ovn_controller[97948]: 2025-12-01T20:04:27Z|00162|binding|INFO|Setting lport ab2a4211-760a-400a-bd6c-243749c41a4e down in Southbound
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.243 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:27 compute-0 ovn_controller[97948]: 2025-12-01T20:04:27Z|00163|binding|INFO|Removing iface tapab2a4211-76 ovn-installed in OVS
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.246 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.256 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:c4:d1 10.100.0.4'], port_security=['fa:16:3e:d2:c4:d1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cb05bc1e-3b85-4998-a503-39bd86bdc17e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '162c071887824085bcc9c384a2f8baf0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '006fce21-a511-489a-880a-d2b4557c5d3b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=814c1014-135a-4652-9979-0910a324d6ee, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=ab2a4211-760a-400a-bd6c-243749c41a4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.259 106833 INFO neutron.agent.ovn.metadata.agent [-] Port ab2a4211-760a-400a-bd6c-243749c41a4e in datapath d273f808-5cbd-4428-9f2c-ed8b50232c12 unbound from our chassis#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.263 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d273f808-5cbd-4428-9f2c-ed8b50232c12#033[00m
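The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's row-event machinery firing on the southbound Port_Binding table. A minimal sketch of that idiom, assuming ovsdbapp's public event classes (the handler body is illustrative):

    # Sketch of the ovsdbapp row-event idiom behind "Matched UPDATE" above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch 'update' events on Port_Binding rows, no extra conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries only the columns that changed (up/chassis above).
            print('port', row.logical_port, 'up ->', row.up)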
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.271 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:27 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec  1 20:04:27 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 38.404s CPU time.
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.292 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[6e77eaf7-7f67-4c99-96a1-69982c144c35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:27 compute-0 systemd-machined[155891]: Machine qemu-13-instance-0000000c terminated.
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.333 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[93b1a0f2-4ff0-433e-81be-7cfbd608bb3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.337 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[22416941-1abd-41a3-91c4-79be97106d47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.369 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb6bd68-7699-4516-bd39-4d697714dfb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.394 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[b4970410-36e6-4d9a-b1e2-ed5736779c82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd273f808-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:ef:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584520, 'reachable_time': 21071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256758, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.419 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[7f639991-7fd6-49af-9a63-ded41d769df3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd273f808-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584535, 'tstamp': 584535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256760, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd273f808-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584540, 'tstamp': 584540}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256760, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.421 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd273f808-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.423 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.436 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd273f808-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.436 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.437 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd273f808-50, col_values=(('external_ids', {'iface-id': 'b1e4fac5-26a3-4807-b860-bcfa4669fff5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:04:27 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:27.437 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.437 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.475 189568 INFO nova.virt.libvirt.driver [-] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Instance destroyed successfully.#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.475 189568 DEBUG nova.objects.instance [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lazy-loading 'resources' on Instance uuid cb05bc1e-3b85-4998-a503-39bd86bdc17e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.490 189568 DEBUG nova.virt.libvirt.vif [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:03:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-400616177',display_name='tempest-TestNetworkBasicOps-server-400616177',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-400616177',id=12,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZz4r8QHWL5e6bdaXmmeBXWrJPoycGMIF22/s6cXa/qsI/JeoZ4nIVHktN0yEw5sVq7NOepXV+coQnzO/S0nl+vnmyrZbU9NIMBBwnv3xQCCt5vGYcM/BmPTvGlxk3WhA==',key_name='tempest-TestNetworkBasicOps-138657879',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:03:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='162c071887824085bcc9c384a2f8baf0',ramdisk_id='',reservation_id='r-saoj57l7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-11937336',owner_user_name='tempest-TestNetworkBasicOps-11937336-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:03:43Z,user_data=None,user_id='715e289b64b4407387cbcfe958eb2d0f',uuid=cb05bc1e-3b85-4998-a503-39bd86bdc17e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.490 189568 DEBUG nova.network.os_vif_util [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converting VIF {"id": "ab2a4211-760a-400a-bd6c-243749c41a4e", "address": "fa:16:3e:d2:c4:d1", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab2a4211-76", "ovs_interfaceid": "ab2a4211-760a-400a-bd6c-243749c41a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.491 189568 DEBUG nova.network.os_vif_util [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:c4:d1,bridge_name='br-int',has_traffic_filtering=True,id=ab2a4211-760a-400a-bd6c-243749c41a4e,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab2a4211-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.492 189568 DEBUG os_vif [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:c4:d1,bridge_name='br-int',has_traffic_filtering=True,id=ab2a4211-760a-400a-bd6c-243749c41a4e,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab2a4211-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.495 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.495 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab2a4211-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.497 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.500 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.504 189568 INFO os_vif [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:c4:d1,bridge_name='br-int',has_traffic_filtering=True,id=ab2a4211-760a-400a-bd6c-243749c41a4e,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab2a4211-76')#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.505 189568 INFO nova.virt.libvirt.driver [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Deleting instance files /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e_del#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.506 189568 INFO nova.virt.libvirt.driver [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Deletion of /var/lib/nova/instances/cb05bc1e-3b85-4998-a503-39bd86bdc17e_del complete#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.555 189568 INFO nova.compute.manager [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.556 189568 DEBUG oslo.service.loopingcall [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.556 189568 DEBUG nova.compute.manager [-] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:04:27 compute-0 nova_compute[189564]: 2025-12-01 20:04:27.556 189568 DEBUG nova.network.neutron [-] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:04:28 compute-0 nova_compute[189564]: 2025-12-01 20:04:28.730 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updating instance_info_cache with network_info: [{"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:04:28 compute-0 nova_compute[189564]: 2025-12-01 20:04:28.750 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-6c1de815-4e42-4798-9a73-220b67333524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:04:28 compute-0 nova_compute[189564]: 2025-12-01 20:04:28.750 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 20:04:28 compute-0 nova_compute[189564]: 2025-12-01 20:04:28.751 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:04:28 compute-0 nova_compute[189564]: 2025-12-01 20:04:28.752 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:04:28 compute-0 nova_compute[189564]: 2025-12-01 20:04:28.752 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:04:28 compute-0 nova_compute[189564]: 2025-12-01 20:04:28.927 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.125 189568 DEBUG nova.network.neutron [-] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.155 189568 INFO nova.compute.manager [-] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Took 1.60 seconds to deallocate network for instance.#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.204 189568 DEBUG oslo_concurrency.lockutils [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.204 189568 DEBUG oslo_concurrency.lockutils [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.297 189568 DEBUG nova.compute.provider_tree [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.310 189568 DEBUG nova.compute.manager [req-c5c70169-7e22-445b-a574-bb3b8e5c1b5a req-1f3f144f-4186-4ba0-9718-b9ca3be1dee3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Received event network-vif-deleted-ab2a4211-760a-400a-bd6c-243749c41a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.315 189568 DEBUG nova.scheduler.client.report [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.338 189568 DEBUG oslo_concurrency.lockutils [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.373 189568 INFO nova.scheduler.client.report [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Deleted allocations for instance cb05bc1e-3b85-4998-a503-39bd86bdc17e#033[00m
Dec  1 20:04:29 compute-0 nova_compute[189564]: 2025-12-01 20:04:29.452 189568 DEBUG oslo_concurrency.lockutils [None req-17f63672-89f9-4340-bff7-fb5520f4160d 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "cb05bc1e-3b85-4998-a503-39bd86bdc17e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:04:29 compute-0 podman[203750]: time="2025-12-01T20:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:04:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:04:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.639 189568 DEBUG oslo_concurrency.lockutils [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "6c1de815-4e42-4798-9a73-220b67333524" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.640 189568 DEBUG oslo_concurrency.lockutils [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.640 189568 DEBUG oslo_concurrency.lockutils [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "6c1de815-4e42-4798-9a73-220b67333524-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.640 189568 DEBUG oslo_concurrency.lockutils [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.641 189568 DEBUG oslo_concurrency.lockutils [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.642 189568 INFO nova.compute.manager [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Terminating instance#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.642 189568 DEBUG nova.compute.manager [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 20:04:30 compute-0 kernel: tap05dcfe74-fe (unregistering): left promiscuous mode
Dec  1 20:04:30 compute-0 ovn_controller[97948]: 2025-12-01T20:04:30Z|00164|binding|INFO|Releasing lport 05dcfe74-fe60-45d4-b1df-aec9fcc57adb from this chassis (sb_readonly=0)
Dec  1 20:04:30 compute-0 ovn_controller[97948]: 2025-12-01T20:04:30Z|00165|binding|INFO|Setting lport 05dcfe74-fe60-45d4-b1df-aec9fcc57adb down in Southbound
Dec  1 20:04:30 compute-0 NetworkManager[56474]: <info>  [1764619470.6995] device (tap05dcfe74-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.698 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:30 compute-0 ovn_controller[97948]: 2025-12-01T20:04:30Z|00166|binding|INFO|Removing iface tap05dcfe74-fe ovn-installed in OVS
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.704 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:30.719 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:ce:cc 10.100.0.11'], port_security=['fa:16:3e:96:ce:cc 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6c1de815-4e42-4798-9a73-220b67333524', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '162c071887824085bcc9c384a2f8baf0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2076f83d-5552-45b8-8fa9-3136d8f7a584', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=814c1014-135a-4652-9979-0910a324d6ee, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=05dcfe74-fe60-45d4-b1df-aec9fcc57adb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:04:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:30.721 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 05dcfe74-fe60-45d4-b1df-aec9fcc57adb in datapath d273f808-5cbd-4428-9f2c-ed8b50232c12 unbound from our chassis#033[00m
Dec  1 20:04:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:30.724 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d273f808-5cbd-4428-9f2c-ed8b50232c12, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 20:04:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:30.726 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[27a5d18f-47eb-4d9f-90e6-b552f5bc31c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:30.728 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12 namespace which is not needed anymore#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.749 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:30 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  1 20:04:30 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 46.687s CPU time.
Dec  1 20:04:30 compute-0 systemd-machined[155891]: Machine qemu-11-instance-0000000a terminated.
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.884 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.936 189568 INFO nova.virt.libvirt.driver [-] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Instance destroyed successfully.#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.937 189568 DEBUG nova.objects.instance [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lazy-loading 'resources' on Instance uuid 6c1de815-4e42-4798-9a73-220b67333524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.951 189568 DEBUG nova.virt.libvirt.vif [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:02:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1354137625',display_name='tempest-TestNetworkBasicOps-server-1354137625',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1354137625',id=10,image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXXL1VmYJQIcc1w3eeVop88t2Ef6y4FcvSuzTqjnp4aoRVZAWxw/mpCexZIWojf5DtgeBdIftUsHhfzzaOrN8U3tBt+3B3E1Cnro9vJzaqRXCHV+LgsCurD0OxCo26xfA==',key_name='tempest-TestNetworkBasicOps-1284131701',keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:02:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='162c071887824085bcc9c384a2f8baf0',ramdisk_id='',reservation_id='r-yvgdafub',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d169c234-7ac2-4fdc-b9fa-a08c93484d75',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-11937336',owner_user_name='tempest-TestNetworkBasicOps-11937336-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:02:44Z,user_data=None,user_id='715e289b64b4407387cbcfe958eb2d0f',uuid=6c1de815-4e42-4798-9a73-220b67333524,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.951 189568 DEBUG nova.network.os_vif_util [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converting VIF {"id": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "address": "fa:16:3e:96:ce:cc", "network": {"id": "d273f808-5cbd-4428-9f2c-ed8b50232c12", "bridge": "br-int", "label": "tempest-network-smoke--1707279970", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "162c071887824085bcc9c384a2f8baf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05dcfe74-fe", "ovs_interfaceid": "05dcfe74-fe60-45d4-b1df-aec9fcc57adb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.952 189568 DEBUG nova.network.os_vif_util [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:ce:cc,bridge_name='br-int',has_traffic_filtering=True,id=05dcfe74-fe60-45d4-b1df-aec9fcc57adb,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05dcfe74-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.952 189568 DEBUG os_vif [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:ce:cc,bridge_name='br-int',has_traffic_filtering=True,id=05dcfe74-fe60-45d4-b1df-aec9fcc57adb,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05dcfe74-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.953 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.953 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05dcfe74-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.954 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:30 compute-0 neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12[255407]: [NOTICE]   (255412) : haproxy version is 2.8.14-c23fe91
Dec  1 20:04:30 compute-0 neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12[255407]: [NOTICE]   (255412) : path to executable is /usr/sbin/haproxy
Dec  1 20:04:30 compute-0 neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12[255407]: [WARNING]  (255412) : Exiting Master process...
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.959 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.961 189568 INFO os_vif [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:ce:cc,bridge_name='br-int',has_traffic_filtering=True,id=05dcfe74-fe60-45d4-b1df-aec9fcc57adb,network=Network(d273f808-5cbd-4428-9f2c-ed8b50232c12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05dcfe74-fe')#033[00m
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.961 189568 INFO nova.virt.libvirt.driver [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Deleting instance files /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524_del#033[00m
Dec  1 20:04:30 compute-0 neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12[255407]: [ALERT]    (255412) : Current worker (255414) exited with code 143 (Terminated)
Dec  1 20:04:30 compute-0 neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12[255407]: [WARNING]  (255412) : All workers exited. Exiting... (0)
Dec  1 20:04:30 compute-0 nova_compute[189564]: 2025-12-01 20:04:30.962 189568 INFO nova.virt.libvirt.driver [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Deletion of /var/lib/nova/instances/6c1de815-4e42-4798-9a73-220b67333524_del complete#033[00m
Dec  1 20:04:30 compute-0 systemd[1]: libpod-60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410.scope: Deactivated successfully.
Dec  1 20:04:30 compute-0 podman[256806]: 2025-12-01 20:04:30.976813908 +0000 UTC m=+0.077447545 container died 60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  1 20:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410-userdata-shm.mount: Deactivated successfully.
Dec  1 20:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-25e21cfd61ff86c2cdb153566dcaac9b1e4f22b0c8f3ebb15b3a06c6c2916ce9-merged.mount: Deactivated successfully.
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.017 189568 INFO nova.compute.manager [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.017 189568 DEBUG oslo.service.loopingcall [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.017 189568 DEBUG nova.compute.manager [-] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.017 189568 DEBUG nova.network.neutron [-] [instance: 6c1de815-4e42-4798-9a73-220b67333524] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:04:31 compute-0 podman[256806]: 2025-12-01 20:04:31.028841211 +0000 UTC m=+0.129474838 container cleanup 60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 20:04:31 compute-0 systemd[1]: libpod-conmon-60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410.scope: Deactivated successfully.
Dec  1 20:04:31 compute-0 podman[256850]: 2025-12-01 20:04:31.124780427 +0000 UTC m=+0.061394873 container remove 60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 20:04:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:31.136 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d369e3d1-5c38-4bbb-83db-22c51bc700a0]: (4, ('Mon Dec  1 08:04:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12 (60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410)\n60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410\nMon Dec  1 08:04:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12 (60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410)\n60e50dd4313bdb53c88c794a22d7e1fe77f90f939c042f8eb10c1e7d9d164410\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:31.137 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[7717e467-4a0f-45d0-9376-6929974b9ca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:31.138 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd273f808-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.141 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:31 compute-0 kernel: tapd273f808-50: left promiscuous mode
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.144 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:31.147 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[754c487f-b8fd-4df8-bd2f-938c22fc1366]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.162 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:04:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:31.170 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[246952bb-ba87-47a5-bb63-e759ec2b5f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:31.172 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[2745ccf3-b0ae-48e4-b7ef-3bda948965bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:31.186 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[bc810283-cfab-4ee1-becc-ad7660100eb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584513, 'reachable_time': 39269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256864, 'error': None, 'target': 'ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:31.189 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d273f808-5cbd-4428-9f2c-ed8b50232c12 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 20:04:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dd273f808\x2d5cbd\x2d4428\x2d9f2c\x2ded8b50232c12.mount: Deactivated successfully.
Dec  1 20:04:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:31.189 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a37b14-57c7-4265-9ebd-6c306dc0d7f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:04:31 compute-0 openstack_network_exporter[205914]: ERROR   20:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:04:31 compute-0 openstack_network_exporter[205914]: ERROR   20:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:04:31 compute-0 openstack_network_exporter[205914]: ERROR   20:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:04:31 compute-0 openstack_network_exporter[205914]: ERROR   20:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:04:31 compute-0 openstack_network_exporter[205914]: ERROR   20:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.435 189568 DEBUG nova.compute.manager [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received event network-vif-unplugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.436 189568 DEBUG oslo_concurrency.lockutils [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "6c1de815-4e42-4798-9a73-220b67333524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.437 189568 DEBUG oslo_concurrency.lockutils [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.437 189568 DEBUG oslo_concurrency.lockutils [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.438 189568 DEBUG nova.compute.manager [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] No waiting events found dispatching network-vif-unplugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.439 189568 DEBUG nova.compute.manager [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received event network-vif-unplugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.441 189568 DEBUG nova.compute.manager [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received event network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.442 189568 DEBUG oslo_concurrency.lockutils [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "6c1de815-4e42-4798-9a73-220b67333524-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.442 189568 DEBUG oslo_concurrency.lockutils [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.442 189568 DEBUG oslo_concurrency.lockutils [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.443 189568 DEBUG nova.compute.manager [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] No waiting events found dispatching network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.443 189568 WARNING nova.compute.manager [req-0ace642d-9fed-48b4-9654-93ae9058dfd9 req-d5de76c0-6756-4c8f-8691-c2285b258499 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received unexpected event network-vif-plugged-05dcfe74-fe60-45d4-b1df-aec9fcc57adb for instance with vm_state active and task_state deleting.
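The Acquiring/acquired/released triples above are emitted by oslo.concurrency's synchronized wrapper (lockutils.py:404/409/423), which nova uses to serialize external-event handling per instance. A minimal sketch of the same pattern, assuming only the oslo.concurrency library (the lock name is taken from the log; the function body is illustrative):

    from oslo_concurrency import lockutils

    # The decorator emits the same DEBUG lines seen above: "Acquiring lock ...",
    # "Lock ... acquired ... waited Ns", and "Lock ... released ... held Ns".
    @lockutils.synchronized('6c1de815-4e42-4798-9a73-220b67333524-events')
    def _pop_event():
        # critical section: pop the pending external event for this instance
        pass

    _pop_event()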
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.467 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.582 189568 DEBUG nova.network.neutron [-] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.599 189568 INFO nova.compute.manager [-] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Took 0.58 seconds to deallocate network for instance.
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.645 189568 DEBUG oslo_concurrency.lockutils [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.646 189568 DEBUG oslo_concurrency.lockutils [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.712 189568 DEBUG nova.compute.provider_tree [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.731 189568 DEBUG nova.scheduler.client.report [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
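Placement derives each inventory record's usable capacity as (total - reserved) * allocation_ratio, so the unchanged inventory above advertises 32 VCPU, 7168 MB of RAM, and 70.2 GB of disk. A quick check of that arithmetic (a sketch, not nova code):

    # Inventory reported above for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2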
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.833 189568 DEBUG nova.compute.manager [req-4d0a72e3-54bb-4b89-be1a-e2a4e01b7a8c req-a39db24c-f2e7-4bcb-9c55-d7cc5ad0e1b8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Received event network-vif-deleted-05dcfe74-fe60-45d4-b1df-aec9fcc57adb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.836 189568 DEBUG oslo_concurrency.lockutils [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:04:31 compute-0 nova_compute[189564]: 2025-12-01 20:04:31.865 189568 INFO nova.scheduler.client.report [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Deleted allocations for instance 6c1de815-4e42-4798-9a73-220b67333524
Dec  1 20:04:32 compute-0 nova_compute[189564]: 2025-12-01 20:04:32.125 189568 DEBUG oslo_concurrency.lockutils [None req-818fb88c-2ce1-45c5-b391-7b1a65553fa4 715e289b64b4407387cbcfe958eb2d0f 162c071887824085bcc9c384a2f8baf0 - - default default] Lock "6c1de815-4e42-4798-9a73-220b67333524" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:04:33 compute-0 nova_compute[189564]: 2025-12-01 20:04:33.930 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:35 compute-0 nova_compute[189564]: 2025-12-01 20:04:35.956 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:37 compute-0 nova_compute[189564]: 2025-12-01 20:04:37.220 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:37 compute-0 podman[256865]: 2025-12-01 20:04:37.394036885 +0000 UTC m=+0.155917163 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container)
Dec  1 20:04:37 compute-0 nova_compute[189564]: 2025-12-01 20:04:37.406 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:38 compute-0 nova_compute[189564]: 2025-12-01 20:04:38.933 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:40 compute-0 podman[256887]: 2025-12-01 20:04:40.349312748 +0000 UTC m=+0.108975703 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 20:04:40 compute-0 nova_compute[189564]: 2025-12-01 20:04:40.959 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:42 compute-0 nova_compute[189564]: 2025-12-01 20:04:42.473 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619467.4717963, cb05bc1e-3b85-4998-a503-39bd86bdc17e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 20:04:42 compute-0 nova_compute[189564]: 2025-12-01 20:04:42.474 189568 INFO nova.compute.manager [-] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] VM Stopped (Lifecycle Event)
Dec  1 20:04:42 compute-0 nova_compute[189564]: 2025-12-01 20:04:42.515 189568 DEBUG nova.compute.manager [None req-d33bf05c-7502-4eac-a555-7cbe5aebae04 - - - - - -] [instance: cb05bc1e-3b85-4998-a503-39bd86bdc17e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 20:04:43 compute-0 nova_compute[189564]: 2025-12-01 20:04:43.935 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:45 compute-0 nova_compute[189564]: 2025-12-01 20:04:45.932 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619470.930548, 6c1de815-4e42-4798-9a73-220b67333524 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  1 20:04:45 compute-0 nova_compute[189564]: 2025-12-01 20:04:45.933 189568 INFO nova.compute.manager [-] [instance: 6c1de815-4e42-4798-9a73-220b67333524] VM Stopped (Lifecycle Event)
Dec  1 20:04:45 compute-0 nova_compute[189564]: 2025-12-01 20:04:45.953 189568 DEBUG nova.compute.manager [None req-1090d8e5-f35d-48dc-8d1b-4e938fd97485 - - - - - -] [instance: 6c1de815-4e42-4798-9a73-220b67333524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  1 20:04:45 compute-0 nova_compute[189564]: 2025-12-01 20:04:45.961 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:46 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:46.807 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 20:04:46 compute-0 nova_compute[189564]: 2025-12-01 20:04:46.807 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:46 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:46.809 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 20:04:48 compute-0 podman[256914]: 2025-12-01 20:04:48.364155136 +0000 UTC m=+0.132418603 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true)
Dec  1 20:04:48 compute-0 nova_compute[189564]: 2025-12-01 20:04:48.938 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:50 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:04:50.811 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  1 20:04:50 compute-0 nova_compute[189564]: 2025-12-01 20:04:50.964 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:52 compute-0 podman[256938]: 2025-12-01 20:04:52.360007549 +0000 UTC m=+0.125395818 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 20:04:53 compute-0 nova_compute[189564]: 2025-12-01 20:04:53.941 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:55 compute-0 nova_compute[189564]: 2025-12-01 20:04:55.966 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:56 compute-0 podman[256963]: 2025-12-01 20:04:56.348342663 +0000 UTC m=+0.112097163 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 20:04:56 compute-0 podman[256961]: 2025-12-01 20:04:56.358246689 +0000 UTC m=+0.117008210 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.buildah.version=1.29.0, version=9.4, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 20:04:56 compute-0 podman[256964]: 2025-12-01 20:04:56.360735308 +0000 UTC m=+0.117213196 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 20:04:56 compute-0 podman[256962]: 2025-12-01 20:04:56.383375742 +0000 UTC m=+0.137870406 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  1 20:04:56 compute-0 podman[256965]: 2025-12-01 20:04:56.398487105 +0000 UTC m=+0.139708935 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  1 20:04:58 compute-0 nova_compute[189564]: 2025-12-01 20:04:58.944 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:04:59 compute-0 podman[203750]: time="2025-12-01T20:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:04:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 20:04:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
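The two GET requests above hit podman's libpod REST API over its unix socket; the podman_exporter config later in this log mounts /run/podman/podman.sock for exactly this purpose. A minimal sketch of the same container-list query from Python, assuming that socket path:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, as used by the libpod REST API."""
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.unix_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(conn.getresponse().read()[:200])  # JSON list of containers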
Dec  1 20:05:00 compute-0 nova_compute[189564]: 2025-12-01 20:05:00.969 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:01 compute-0 openstack_network_exporter[205914]: ERROR   20:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:05:01 compute-0 openstack_network_exporter[205914]: ERROR   20:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:05:01 compute-0 openstack_network_exporter[205914]: ERROR   20:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:05:01 compute-0 openstack_network_exporter[205914]: ERROR   20:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:05:01 compute-0 openstack_network_exporter[205914]: ERROR   20:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:05:03 compute-0 nova_compute[189564]: 2025-12-01 20:05:03.948 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:05 compute-0 nova_compute[189564]: 2025-12-01 20:05:05.973 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:08 compute-0 podman[257063]: 2025-12-01 20:05:08.342433253 +0000 UTC m=+0.110227924 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 20:05:08 compute-0 nova_compute[189564]: 2025-12-01 20:05:08.949 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:10 compute-0 nova_compute[189564]: 2025-12-01 20:05:10.977 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:11 compute-0 podman[257083]: 2025-12-01 20:05:11.32785674 +0000 UTC m=+0.095694969 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 20:05:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:12.222 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:05:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:12.223 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:05:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:12.223 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:05:13 compute-0 nova_compute[189564]: 2025-12-01 20:05:13.953 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:15 compute-0 nova_compute[189564]: 2025-12-01 20:05:15.980 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:17 compute-0 nova_compute[189564]: 2025-12-01 20:05:17.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:05:17 compute-0 nova_compute[189564]: 2025-12-01 20:05:17.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 20:05:17 compute-0 ovn_controller[97948]: 2025-12-01T20:05:17Z|00167|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec  1 20:05:18 compute-0 nova_compute[189564]: 2025-12-01 20:05:18.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:05:18 compute-0 nova_compute[189564]: 2025-12-01 20:05:18.957 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:19 compute-0 podman[257109]: 2025-12-01 20:05:19.336527749 +0000 UTC m=+0.100892245 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:05:20 compute-0 nova_compute[189564]: 2025-12-01 20:05:20.984 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.530 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "2e63a3e2-688c-470f-9b69-98ac22f0c892" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.531 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.557 189568 DEBUG nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.670 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.671 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.683 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.684 189568 INFO nova.compute.claims [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Claim successful on node compute-0.ctlplane.example.com
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.783 189568 DEBUG nova.scheduler.client.report [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.804 189568 DEBUG nova.scheduler.client.report [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.805 189568 DEBUG nova.compute.provider_tree [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.830 189568 DEBUG nova.scheduler.client.report [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.861 189568 DEBUG nova.scheduler.client.report [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.921 189568 DEBUG nova.compute.provider_tree [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.946 189568 DEBUG nova.scheduler.client.report [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.989 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:05:22 compute-0 nova_compute[189564]: 2025-12-01 20:05:22.990 189568 DEBUG nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.036 189568 DEBUG nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.037 189568 DEBUG nova.network.neutron [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.056 189568 INFO nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.080 189568 DEBUG nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.175 189568 DEBUG nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.176 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.177 189568 INFO nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Creating image(s)
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.177 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "/var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.177 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "/var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.178 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "/var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.179 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "556b39aa36844a62d14eda3a6341e6c6cb1bcd4a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.179 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "556b39aa36844a62d14eda3a6341e6c6cb1bcd4a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:05:23 compute-0 podman[257132]: 2025-12-01 20:05:23.384922691 +0000 UTC m=+0.148186556 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 20:05:23 compute-0 nova_compute[189564]: 2025-12-01 20:05:23.528 189568 DEBUG nova.policy [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87b1f4a5842648dead0562b1cf8b4f18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.044 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.432 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.534 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a.part --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.536 189568 DEBUG nova.virt.images [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] bffb6851-f47b-44e0-90e7-e01d72f9a4d2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.537 189568 DEBUG nova.privsep.utils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.538 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a.part /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.852 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a.part /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a.converted" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.861 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.953 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a.converted --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.956 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "556b39aa36844a62d14eda3a6341e6c6cb1bcd4a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
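The lines above are the libvirt driver populating the base-image cache: the Glance image bffb6851-f47b-44e0-90e7-e01d72f9a4d2 arrives as qcow2, is inspected with a prlimit-wrapped qemu-img info (address space capped at 1 GiB, CPU at 30 s), converted to raw under /var/lib/nova/instances/_base, and the per-image lock is released after 1.777 s. The 40-hex-digit lock and file name is the SHA-1 of the image UUID, which is how nova names base-cache entries. A minimal sketch of the same fetch-to-raw step, assuming plain subprocess calls rather than nova's oslo.concurrency wrappers (only the paths and the UUID are taken from the log):

    import hashlib
    import json
    import subprocess

    image_id = "bffb6851-f47b-44e0-90e7-e01d72f9a4d2"
    base_dir = "/var/lib/nova/instances/_base"
    # nova names base-cache files after the SHA-1 of the image UUID
    cache_name = hashlib.sha1(image_id.encode()).hexdigest()
    part = f"{base_dir}/{cache_name}.part"
    converted = f"{base_dir}/{cache_name}.converted"

    # Inspect the downloaded image; the logged version wraps this in
    # oslo_concurrency.prlimit to bound qemu-img's address space and CPU time.
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json", part]))

    # Convert qcow2 -> raw, bypassing the host page cache (-t none), as logged.
    if info["format"] == "qcow2":
        subprocess.check_call(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             part, converted])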
Dec  1 20:05:24 compute-0 nova_compute[189564]: 2025-12-01 20:05:24.985 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.046 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.049 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "556b39aa36844a62d14eda3a6341e6c6cb1bcd4a" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.050 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "556b39aa36844a62d14eda3a6341e6c6cb1bcd4a" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.075 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.134 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.136 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a,backing_fmt=raw /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.175 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a,backing_fmt=raw /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.176 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "556b39aa36844a62d14eda3a6341e6c6cb1bcd4a" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
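With the raw base image in place, the instance's root disk is created as a qcow2 overlay backed by it: writes go only to the overlay, so many instances can share one cached base file. The trailing 1073741824 is the 1 GiB root disk of the m1.nano flavor (root_gb=1, visible further down in this log). The same invocation sketched with plain subprocess, all literals lifted from the two CMD lines above:

    import subprocess

    base = "/var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a"
    disk = "/var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk"
    size = 1 * 1024 ** 3  # flavor root_gb=1 -> 1073741824 bytes

    # qcow2 overlay on top of the raw base image; the base stays read-only.
    subprocess.check_call(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw", disk, str(size)])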
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.176 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.229 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.230 189568 DEBUG nova.virt.disk.api [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Checking if we can resize image /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.230 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.270 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.271 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.271 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.293 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.293 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.294 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.294 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.297 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.297 189568 DEBUG nova.virt.disk.api [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Cannot resize image /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
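The resize check above compares the requested size against the disk's current virtual size and never shrinks; the overlay was created at exactly the requested 1073741824 bytes, so there is nothing to grow and the spawn continues with the disk as-is. A rough sketch of that check, assuming it reduces to a virtual-size comparison (nova's real helper is can_resize_image in nova/virt/disk/api.py):

    import json
    import subprocess

    def can_resize_image(path: str, new_size: int) -> bool:
        """Return True only when growing the image; shrinking is refused."""
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", "--force-share", "--output=json", path]))
        return new_size > info["virtual-size"]

    # For the disk above, can_resize_image(disk, 1073741824) would be False,
    # producing the "Cannot resize image ... to a smaller size" debug line.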
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.297 189568 DEBUG nova.objects.instance [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lazy-loading 'migration_context' on Instance uuid 2e63a3e2-688c-470f-9b69-98ac22f0c892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.311 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.311 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Ensure instance console log exists: /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.312 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.312 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.312 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.628 189568 DEBUG nova.network.neutron [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Successfully created port: 3076324c-1772-4ebf-8d52-056282f5b5b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.761 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.763 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5348MB free_disk=72.30519485473633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.763 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.763 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:05:25 compute-0 nova_compute[189564]: 2025-12-01 20:05:25.988 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.075 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 2e63a3e2-688c-470f-9b69-98ac22f0c892 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.076 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.076 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.128 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.146 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
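The inventory payload above is what placement uses to compute schedulable capacity per resource class: capacity = (total - reserved) * allocation_ratio, i.e. 32 VCPU, 7168 MB of RAM, and 70.2 GB of disk for this node. A worked example with the logged numbers:

    # Worked example of placement's capacity formula applied to the inventory
    # logged above; the dict literal is abbreviated from the log line.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2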
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.172 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.173 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.812 189568 DEBUG nova.network.neutron [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Successfully updated port: 3076324c-1772-4ebf-8d52-056282f5b5b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.829 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.829 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquired lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.830 189568 DEBUG nova.network.neutron [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.943 189568 DEBUG nova.compute.manager [req-ddcd25a8-625b-4098-82d9-67bcde45f05f req-c4bbddc9-f93b-4758-ba90-7ffaf3c1aeb3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Received event network-changed-3076324c-1772-4ebf-8d52-056282f5b5b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.944 189568 DEBUG nova.compute.manager [req-ddcd25a8-625b-4098-82d9-67bcde45f05f req-c4bbddc9-f93b-4758-ba90-7ffaf3c1aeb3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Refreshing instance network info cache due to event network-changed-3076324c-1772-4ebf-8d52-056282f5b5b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 20:05:26 compute-0 nova_compute[189564]: 2025-12-01 20:05:26.944 189568 DEBUG oslo_concurrency.lockutils [req-ddcd25a8-625b-4098-82d9-67bcde45f05f req-c4bbddc9-f93b-4758-ba90-7ffaf3c1aeb3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:05:27 compute-0 nova_compute[189564]: 2025-12-01 20:05:27.059 189568 DEBUG nova.network.neutron [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  1 20:05:27 compute-0 nova_compute[189564]: 2025-12-01 20:05:27.150 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:05:27 compute-0 nova_compute[189564]: 2025-12-01 20:05:27.151 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:05:27 compute-0 podman[257186]: 2025-12-01 20:05:27.343145023 +0000 UTC m=+0.092220708 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 20:05:27 compute-0 podman[257184]: 2025-12-01 20:05:27.347805951 +0000 UTC m=+0.107660381 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  1 20:05:27 compute-0 podman[257185]: 2025-12-01 20:05:27.350303801 +0000 UTC m=+0.116607707 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS)
Dec  1 20:05:27 compute-0 podman[257183]: 2025-12-01 20:05:27.381112775 +0000 UTC m=+0.142994160 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release-0.7.12=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, container_name=kepler)
Dec  1 20:05:27 compute-0 podman[257187]: 2025-12-01 20:05:27.400494895 +0000 UTC m=+0.148718464 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.700 189568 DEBUG nova.network.neutron [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updating instance_info_cache with network_info: [{"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.729 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Releasing lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.730 189568 DEBUG nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Instance network_info: |[{"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
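The network_info blob cached above is plain JSON, so the fields that matter for the guest wiring (tap device name, fixed IP, MTU) can be pulled out directly. A small sketch using an abbreviated copy of the logged structure:

    import json

    # Abbreviated from the network_info logged above; only log-derived values.
    network_info = json.loads("""[{
      "id": "3076324c-1772-4ebf-8d52-056282f5b5b9",
      "address": "fa:16:3e:ec:bc:e0",
      "network": {"meta": {"mtu": 1442},
                  "subnets": [{"cidr": "10.100.0.0/16",
                               "ips": [{"address": "10.100.3.29"}]}]},
      "devname": "tap3076324c-17"
    }]""")
    vif = network_info[0]
    ip = vif["network"]["subnets"][0]["ips"][0]["address"]  # 10.100.3.29
    mtu = vif["network"]["meta"]["mtu"]                     # 1442
    print(vif["devname"], ip, mtu)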
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.731 189568 DEBUG oslo_concurrency.lockutils [req-ddcd25a8-625b-4098-82d9-67bcde45f05f req-c4bbddc9-f93b-4758-ba90-7ffaf3c1aeb3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.732 189568 DEBUG nova.network.neutron [req-ddcd25a8-625b-4098-82d9-67bcde45f05f req-c4bbddc9-f93b-4758-ba90-7ffaf3c1aeb3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Refreshing network info cache for port 3076324c-1772-4ebf-8d52-056282f5b5b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.737 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Start _get_guest_xml network_info=[{"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:05:12Z,direct_url=<?>,disk_format='qcow2',id=bffb6851-f47b-44e0-90e7-e01d72f9a4d2,min_disk=0,min_ram=0,name='tempest-scenario-img--1009152532',owner='ce8fb01897ec4dc4a54e7b478a0450c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:05:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'bffb6851-f47b-44e0-90e7-e01d72f9a4d2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.761 189568 WARNING nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.770 189568 DEBUG nova.virt.libvirt.host [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.771 189568 DEBUG nova.virt.libvirt.host [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.779 189568 DEBUG nova.virt.libvirt.host [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.780 189568 DEBUG nova.virt.libvirt.host [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.781 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.782 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:05:12Z,direct_url=<?>,disk_format='qcow2',id=bffb6851-f47b-44e0-90e7-e01d72f9a4d2,min_disk=0,min_ram=0,name='tempest-scenario-img--1009152532',owner='ce8fb01897ec4dc4a54e7b478a0450c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:05:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.783 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.784 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.784 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.785 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.786 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.787 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.787 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.788 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.789 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.789 189568 DEBUG nova.virt.hardware [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
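With no limits or preferences from the flavor or image (all logged as 0:0:0), the topology search enumerates every (sockets, cores, threads) split whose product equals the vCPU count; for 1 vCPU that leaves only 1:1:1. An illustrative sketch of that enumeration (nova's real version in nova/virt/hardware.py additionally honors limits and preference ordering):

    import itertools

    def possible_topologies(vcpus: int):
        # Yield every (sockets, cores, threads) whose product is the vCPU count.
        for sockets, cores, threads in itertools.product(
                range(1, vcpus + 1), repeat=3):
            if sockets * cores * threads == vcpus:
                yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged above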
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.796 189568 DEBUG nova.virt.libvirt.vif [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:05:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2',id=13,image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f148fe63-b9e9-42f1-b9d7-8790a6058874'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce8fb01897ec4dc4a54e7b478a0450c6',ramdisk_id='',reservation_id='r-s00hz3dx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1865175500',owner_user_name='tempest-PrometheusGabbiTest-1865175500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:05:23Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='87b1f4a5842648dead0562b1cf8b4f18',uuid=2e63a3e2-688c-470f-9b69-98ac22f0c892,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.797 189568 DEBUG nova.network.os_vif_util [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converting VIF {"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.798 189568 DEBUG nova.network.os_vif_util [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bc:e0,bridge_name='br-int',has_traffic_filtering=True,id=3076324c-1772-4ebf-8d52-056282f5b5b9,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3076324c-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.800 189568 DEBUG nova.objects.instance [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e63a3e2-688c-470f-9b69-98ac22f0c892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.826 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <uuid>2e63a3e2-688c-470f-9b69-98ac22f0c892</uuid>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <name>instance-0000000d</name>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <nova:name>te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2</nova:name>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:05:28</nova:creationTime>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:05:28 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:05:28 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:05:28 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:05:28 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:05:28 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:05:28 compute-0 nova_compute[189564]:        <nova:user uuid="87b1f4a5842648dead0562b1cf8b4f18">tempest-PrometheusGabbiTest-1865175500-project-member</nova:user>
Dec  1 20:05:28 compute-0 nova_compute[189564]:        <nova:project uuid="ce8fb01897ec4dc4a54e7b478a0450c6">tempest-PrometheusGabbiTest-1865175500</nova:project>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="bffb6851-f47b-44e0-90e7-e01d72f9a4d2"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:05:28 compute-0 nova_compute[189564]:        <nova:port uuid="3076324c-1772-4ebf-8d52-056282f5b5b9">
Dec  1 20:05:28 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.3.29" ipVersion="4"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <system>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <entry name="serial">2e63a3e2-688c-470f-9b69-98ac22f0c892</entry>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <entry name="uuid">2e63a3e2-688c-470f-9b69-98ac22f0c892</entry>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    </system>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <os>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  </os>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <features>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  </features>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.config"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:ec:bc:e0"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <target dev="tap3076324c-17"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/console.log" append="off"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <video>
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    </video>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:05:28 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:05:28 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:05:28 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:05:28 compute-0 nova_compute[189564]: </domain>
Dec  1 20:05:28 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.828 189568 DEBUG nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Preparing to wait for external event network-vif-plugged-3076324c-1772-4ebf-8d52-056282f5b5b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.829 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.829 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.830 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.831 189568 DEBUG nova.virt.libvirt.vif [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:05:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2',id=13,image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f148fe63-b9e9-42f1-b9d7-8790a6058874'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce8fb01897ec4dc4a54e7b478a0450c6',ramdisk_id='',reservation_id='r-s00hz3dx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1865175500',owner_user_name='tempest-PrometheusGabbiTest-1865175500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:05:23Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='87b1f4a5842648dead0562b1cf8b4f18',uuid=2e63a3e2-688c-470f-9b69-98ac22f0c892,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.831 189568 DEBUG nova.network.os_vif_util [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converting VIF {"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.833 189568 DEBUG nova.network.os_vif_util [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bc:e0,bridge_name='br-int',has_traffic_filtering=True,id=3076324c-1772-4ebf-8d52-056282f5b5b9,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3076324c-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.834 189568 DEBUG os_vif [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bc:e0,bridge_name='br-int',has_traffic_filtering=True,id=3076324c-1772-4ebf-8d52-056282f5b5b9,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3076324c-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.836 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.837 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.838 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.843 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.844 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3076324c-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.846 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3076324c-17, col_values=(('external_ids', {'iface-id': '3076324c-1772-4ebf-8d52-056282f5b5b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:bc:e0', 'vm-uuid': '2e63a3e2-688c-470f-9b69-98ac22f0c892'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.850 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:28 compute-0 NetworkManager[56474]: <info>  [1764619528.8542] manager: (tap3076324c-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.854 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.864 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.866 189568 INFO os_vif [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bc:e0,bridge_name='br-int',has_traffic_filtering=True,id=3076324c-1772-4ebf-8d52-056282f5b5b9,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3076324c-17')#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.938 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.939 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.939 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] No VIF found with MAC fa:16:3e:ec:bc:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.940 189568 INFO nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Using config drive#033[00m
Dec  1 20:05:28 compute-0 nova_compute[189564]: 2025-12-01 20:05:28.963 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:29 compute-0 nova_compute[189564]: 2025-12-01 20:05:29.537 189568 INFO nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Creating config drive at /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.config#033[00m
Dec  1 20:05:29 compute-0 nova_compute[189564]: 2025-12-01 20:05:29.552 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptupsbc5j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:05:29 compute-0 nova_compute[189564]: 2025-12-01 20:05:29.709 189568 DEBUG oslo_concurrency.processutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptupsbc5j" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:05:29 compute-0 podman[203750]: time="2025-12-01T20:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:05:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 20:05:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
Dec  1 20:05:29 compute-0 kernel: tap3076324c-17: entered promiscuous mode
Dec  1 20:05:29 compute-0 NetworkManager[56474]: <info>  [1764619529.8089] manager: (tap3076324c-17): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Dec  1 20:05:29 compute-0 ovn_controller[97948]: 2025-12-01T20:05:29Z|00168|binding|INFO|Claiming lport 3076324c-1772-4ebf-8d52-056282f5b5b9 for this chassis.
Dec  1 20:05:29 compute-0 ovn_controller[97948]: 2025-12-01T20:05:29Z|00169|binding|INFO|3076324c-1772-4ebf-8d52-056282f5b5b9: Claiming fa:16:3e:ec:bc:e0 10.100.3.29
Dec  1 20:05:29 compute-0 nova_compute[189564]: 2025-12-01 20:05:29.809 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.830 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:bc:e0 10.100.3.29'], port_security=['fa:16:3e:ec:bc:e0 10.100.3.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.29/16', 'neutron:device_id': '2e63a3e2-688c-470f-9b69-98ac22f0c892', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31f326a2-1dd0-42fd-9a01-b17a7fb79ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4321fa83-980a-46fb-a7a0-cf14441fe575, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=3076324c-1772-4ebf-8d52-056282f5b5b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.831 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 3076324c-1772-4ebf-8d52-056282f5b5b9 in datapath b72e0b6b-24ff-49af-9297-d0f55dd2fe07 bound to our chassis#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.832 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b72e0b6b-24ff-49af-9297-d0f55dd2fe07#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.851 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[e1eda637-b89e-4a11-bda2-88e03fcfdc85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.853 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb72e0b6b-21 in ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.856 239862 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb72e0b6b-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.856 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[d46124e7-2ba2-4416-a0c8-9ebfe1006fb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.858 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc778c5-7c33-457e-a525-b73b76816901]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:29 compute-0 systemd-udevd[257301]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.872 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[600b6c7b-570e-4ae6-8b41-78395b960d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:29 compute-0 nova_compute[189564]: 2025-12-01 20:05:29.877 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:29 compute-0 nova_compute[189564]: 2025-12-01 20:05:29.884 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:29 compute-0 systemd-machined[155891]: New machine qemu-14-instance-0000000d.
Dec  1 20:05:29 compute-0 ovn_controller[97948]: 2025-12-01T20:05:29Z|00170|binding|INFO|Setting lport 3076324c-1772-4ebf-8d52-056282f5b5b9 ovn-installed in OVS
Dec  1 20:05:29 compute-0 ovn_controller[97948]: 2025-12-01T20:05:29Z|00171|binding|INFO|Setting lport 3076324c-1772-4ebf-8d52-056282f5b5b9 up in Southbound
Dec  1 20:05:29 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Dec  1 20:05:29 compute-0 nova_compute[189564]: 2025-12-01 20:05:29.897 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:29 compute-0 NetworkManager[56474]: <info>  [1764619529.9049] device (tap3076324c-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:05:29 compute-0 NetworkManager[56474]: <info>  [1764619529.9068] device (tap3076324c-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.907 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[db1aace6-b312-490b-adb2-70ba1cf5089b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.943 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f90ce2-caf0-4458-a9b9-b054f53e9282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.951 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[72f40787-2077-4fe5-9d0c-25c6df899c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:29 compute-0 NetworkManager[56474]: <info>  [1764619529.9521] manager: (tapb72e0b6b-20): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.993 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad67ee4-f835-4d17-9019-1aa86f7a7947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:29.997 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[52d49797-bbcb-4f98-ba32-05c44ef237a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:30 compute-0 NetworkManager[56474]: <info>  [1764619530.0301] device (tapb72e0b6b-20): carrier: link connected
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.038 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[db59e58a-8a28-4507-805a-6eb7e0c86b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.057 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[1aea2a8c-703f-44b9-a958-ae9cd61732f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb72e0b6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:a1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601774, 'reachable_time': 15601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257333, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.074 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[21c75275-207b-4063-92be-b8b4aca3ca89]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:a118'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601774, 'tstamp': 601774}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257334, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.094 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[69be2882-0209-4b63-b589-e0670d3c2f62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb72e0b6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:a1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601774, 'reachable_time': 15601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257335, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.120 189568 DEBUG nova.compute.manager [req-d69f690f-aa7d-4198-af53-b5c9fed8bd99 req-fee55f05-18c9-42a2-a76b-6cd58d1043f8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Received event network-vif-plugged-3076324c-1772-4ebf-8d52-056282f5b5b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.121 189568 DEBUG oslo_concurrency.lockutils [req-d69f690f-aa7d-4198-af53-b5c9fed8bd99 req-fee55f05-18c9-42a2-a76b-6cd58d1043f8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.121 189568 DEBUG oslo_concurrency.lockutils [req-d69f690f-aa7d-4198-af53-b5c9fed8bd99 req-fee55f05-18c9-42a2-a76b-6cd58d1043f8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.121 189568 DEBUG oslo_concurrency.lockutils [req-d69f690f-aa7d-4198-af53-b5c9fed8bd99 req-fee55f05-18c9-42a2-a76b-6cd58d1043f8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.121 189568 DEBUG nova.compute.manager [req-d69f690f-aa7d-4198-af53-b5c9fed8bd99 req-fee55f05-18c9-42a2-a76b-6cd58d1043f8 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Processing event network-vif-plugged-3076324c-1772-4ebf-8d52-056282f5b5b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.135 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[3cac2548-b118-4c95-8040-b3ee5565928b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.212 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f03ca9-f7e9-4a8f-bc7a-aaf0ae006838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.214 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb72e0b6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.214 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.215 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb72e0b6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:05:30 compute-0 kernel: tapb72e0b6b-20: entered promiscuous mode
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.218 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.222 189568 DEBUG nova.network.neutron [req-ddcd25a8-625b-4098-82d9-67bcde45f05f req-c4bbddc9-f93b-4758-ba90-7ffaf3c1aeb3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updated VIF entry in instance network info cache for port 3076324c-1772-4ebf-8d52-056282f5b5b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.222 189568 DEBUG nova.network.neutron [req-ddcd25a8-625b-4098-82d9-67bcde45f05f req-c4bbddc9-f93b-4758-ba90-7ffaf3c1aeb3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updating instance_info_cache with network_info: [{"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:05:30 compute-0 NetworkManager[56474]: <info>  [1764619530.2236] manager: (tapb72e0b6b-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.224 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.230 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb72e0b6b-20, col_values=(('external_ids', {'iface-id': '7a2b95ce-3fa4-48e0-a152-7ae4f9eed7c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
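The three ovsdbapp transactions above rewire the metadata tap: the DelPortCommand against br-ex is a no-op ("Transaction caused no change"), the AddPortCommand plugs tapb72e0b6b-20 into the integration bridge, and the DbSetCommand stamps external_ids:iface-id with the logical port UUID so ovn-controller can track the binding. A minimal sketch of the equivalent ovs-vsctl calls; the bridge, port, and UUID names come from the log, everything else is illustrative:

    import subprocess

    PORT = "tapb72e0b6b-20"
    IFACE_ID = "7a2b95ce-3fa4-48e0-a152-7ae4f9eed7c9"

    # DelPortCommand(if_exists=True): drop the tap from br-ex if it is there.
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", PORT], check=True)
    # AddPortCommand(may_exist=True): plug it into the integration bridge.
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT], check=True)
    # DbSetCommand: tag the interface so ovn-controller can bind the logical port.
    subprocess.run(["ovs-vsctl", "set", "Interface", PORT,
                    f"external_ids:iface-id={IFACE_ID}"], check=True)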
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.233 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:30 compute-0 ovn_controller[97948]: 2025-12-01T20:05:30Z|00172|binding|INFO|Releasing lport 7a2b95ce-3fa4-48e0-a152-7ae4f9eed7c9 from this chassis (sb_readonly=0)
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.234 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.238 106833 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b72e0b6b-24ff-49af-9297-d0f55dd2fe07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b72e0b6b-24ff-49af-9297-d0f55dd2fe07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.240 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[4905eb9a-5a55-43bf-b67b-e040d2f5dcae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.241 106833 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: global
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    log         /dev/log local0 debug
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    log-tag     haproxy-metadata-proxy-b72e0b6b-24ff-49af-9297-d0f55dd2fe07
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    user        root
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    group       root
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    maxconn     1024
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    pidfile     /var/lib/neutron/external/pids/b72e0b6b-24ff-49af-9297-d0f55dd2fe07.pid.haproxy
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    daemon
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: defaults
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    log global
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    mode http
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    option httplog
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    option dontlognull
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    option http-server-close
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    option forwardfor
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    retries                 3
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    timeout http-request    30s
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    timeout connect         30s
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    timeout client          32s
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    timeout server          32s
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    timeout http-keep-alive 30s
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: listen listener
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    bind 169.254.169.254:80
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    server metadata /var/lib/neutron/metadata_proxy
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]:    http-request add-header X-OVN-Network-ID b72e0b6b-24ff-49af-9297-d0f55dd2fe07
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  1 20:05:30 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:05:30.243 106833 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'env', 'PROCESS_TAG=haproxy-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b72e0b6b-24ff-49af-9297-d0f55dd2fe07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
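The rendered haproxy config binds the link-local metadata address 169.254.169.254:80 inside the ovnmeta- network namespace, forwards requests to the agent's unix socket at /var/lib/neutron/metadata_proxy, and injects an X-OVN-Network-ID header so the metadata service can map each request back to network b72e0b6b-24ff-49af-9297-d0f55dd2fe07; the agent then launches haproxy in that namespace via rootwrap, as the command line above shows. A rough by-hand equivalent, assuming the paths from the log and haproxy's standard -c config-check flag:

    import subprocess

    NETNS = "ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07"
    CFG = "/var/lib/neutron/ovn-metadata-proxy/b72e0b6b-24ff-49af-9297-d0f55dd2fe07.conf"

    # Parse-check the generated configuration before starting anything.
    subprocess.run(["haproxy", "-c", "-f", CFG], check=True)
    # Start haproxy inside the metadata namespace, as the agent does via rootwrap.
    subprocess.run(["ip", "netns", "exec", NETNS, "haproxy", "-f", CFG], check=True)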
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.248 189568 DEBUG oslo_concurrency.lockutils [req-ddcd25a8-625b-4098-82d9-67bcde45f05f req-c4bbddc9-f93b-4758-ba90-7ffaf3c1aeb3 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.269 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:30 compute-0 podman[257365]: 2025-12-01 20:05:30.75611807 +0000 UTC m=+0.083178959 container create 803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:05:30 compute-0 systemd[1]: Started libpod-conmon-803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58.scope.
Dec  1 20:05:30 compute-0 podman[257365]: 2025-12-01 20:05:30.713107006 +0000 UTC m=+0.040167905 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  1 20:05:30 compute-0 systemd[1]: Started libcrun container.
Dec  1 20:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406ed8560c48f03beb80b2376ade8d190ee606e5e4d51fe4ef571637e01257a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  1 20:05:30 compute-0 podman[257365]: 2025-12-01 20:05:30.851533299 +0000 UTC m=+0.178594208 container init 803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 20:05:30 compute-0 podman[257365]: 2025-12-01 20:05:30.866669833 +0000 UTC m=+0.193730722 container start 803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  1 20:05:30 compute-0 neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07[257384]: [NOTICE]   (257390) : New worker (257393) forked
Dec  1 20:05:30 compute-0 neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07[257384]: [NOTICE]   (257390) : Loading success.
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.932 189568 DEBUG nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.933 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619530.931445, 2e63a3e2-688c-470f-9b69-98ac22f0c892 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.933 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] VM Started (Lifecycle Event)#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.944 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.949 189568 INFO nova.virt.libvirt.driver [-] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Instance spawned successfully.#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.949 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.954 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.959 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
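The state pair in the line above is nova's integer power-state encoding: the database still holds 0 because the guest was only just defined, while libvirt now reports 1. For readability, the mapping as defined in nova's power_state module, reproduced from memory, so treat it as a sketch:

    # Integer power states behind "DB power_state: 0, VM power_state: 1".
    POWER_STATE = {
        0: "NOSTATE",    # database value before the guest first reports in
        1: "RUNNING",    # what libvirt reports once the domain has started
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }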
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.969 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.969 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.970 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.970 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.970 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.971 189568 DEBUG nova.virt.libvirt.driver [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
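The six "Found default for ..." lines record the buses and models the libvirt driver chose because the image left the corresponding hw_* properties unset; persisting them on the instance keeps the virtual hardware stable across later lifecycle operations. An illustrative reduction of that pass, with the property names and values taken from the log and the helper itself hypothetical:

    DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined(image_props: dict) -> dict:
        # Keep the image's explicit choices; record a default everywhere else.
        return {k: image_props.get(k, v) for k, v in DEFAULTS.items()}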
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.983 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.983 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619530.9316308, 2e63a3e2-688c-470f-9b69-98ac22f0c892 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:05:30 compute-0 nova_compute[189564]: 2025-12-01 20:05:30.983 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.009 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.015 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619530.9432893, 2e63a3e2-688c-470f-9b69-98ac22f0c892 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.015 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.043 189568 INFO nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Took 7.87 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.043 189568 DEBUG nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.044 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.057 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.085 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.121 189568 INFO nova.compute.manager [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Took 8.49 seconds to build instance.#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.140 189568 DEBUG oslo_concurrency.lockutils [None req-9a8d114f-a120-44d9-9d3e-0fcc2a586e30 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:05:31 compute-0 nova_compute[189564]: 2025-12-01 20:05:31.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:05:31 compute-0 openstack_network_exporter[205914]: ERROR   20:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:05:31 compute-0 openstack_network_exporter[205914]: ERROR   20:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:05:31 compute-0 openstack_network_exporter[205914]: ERROR   20:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:05:31 compute-0 openstack_network_exporter[205914]: ERROR   20:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:05:31 compute-0 openstack_network_exporter[205914]: ERROR   20:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
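The exporter probes the appctl unix control sockets of ovs-vswitchd, ovsdb-server, ovn-controller and ovn-northd; on a compute node ovn-northd does not run, and with a kernel datapath there is no dpif-netdev PMD to query, so these errors are expected noise here rather than a fault. A quick diagnostic sketch, assuming the conventional socket directories (they match the volume mounts shown in the openstack_network_exporter health check later in this log):

    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        # ovn-northd sockets will be absent here; that is normal on a compute node.
        print(pattern, "->", hits if hits else "no control sockets found")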
Dec  1 20:05:32 compute-0 nova_compute[189564]: 2025-12-01 20:05:32.233 189568 DEBUG nova.compute.manager [req-c37724ee-4735-4fc3-abac-aad80c31deaa req-f258dde8-4ee0-4fca-a08f-1d7116a41b24 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Received event network-vif-plugged-3076324c-1772-4ebf-8d52-056282f5b5b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:05:32 compute-0 nova_compute[189564]: 2025-12-01 20:05:32.234 189568 DEBUG oslo_concurrency.lockutils [req-c37724ee-4735-4fc3-abac-aad80c31deaa req-f258dde8-4ee0-4fca-a08f-1d7116a41b24 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:05:32 compute-0 nova_compute[189564]: 2025-12-01 20:05:32.234 189568 DEBUG oslo_concurrency.lockutils [req-c37724ee-4735-4fc3-abac-aad80c31deaa req-f258dde8-4ee0-4fca-a08f-1d7116a41b24 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:05:32 compute-0 nova_compute[189564]: 2025-12-01 20:05:32.235 189568 DEBUG oslo_concurrency.lockutils [req-c37724ee-4735-4fc3-abac-aad80c31deaa req-f258dde8-4ee0-4fca-a08f-1d7116a41b24 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:05:32 compute-0 nova_compute[189564]: 2025-12-01 20:05:32.235 189568 DEBUG nova.compute.manager [req-c37724ee-4735-4fc3-abac-aad80c31deaa req-f258dde8-4ee0-4fca-a08f-1d7116a41b24 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] No waiting events found dispatching network-vif-plugged-3076324c-1772-4ebf-8d52-056282f5b5b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:05:32 compute-0 nova_compute[189564]: 2025-12-01 20:05:32.235 189568 WARNING nova.compute.manager [req-c37724ee-4735-4fc3-abac-aad80c31deaa req-f258dde8-4ee0-4fca-a08f-1d7116a41b24 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Received unexpected event network-vif-plugged-3076324c-1772-4ebf-8d52-056282f5b5b9 for instance with vm_state active and task_state None.#033[00m
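Nova pairs spawn-time waits with incoming Neutron notifications; the first network-vif-plugged event completed the wait at 20:05:30.932, so when Neutron re-sends the event after the instance is already active there is no registered waiter left and the manager logs it as unexpected rather than failing. An illustrative reduction of the pop-or-warn pattern guarded by the "-events" lock above (the real logic lives in nova.compute.manager.InstanceEvents):

    import threading

    _events: dict[str, threading.Event] = {}
    _events_lock = threading.Lock()  # the "<uuid>-events" lock from the log

    def pop_instance_event(name: str) -> None:
        with _events_lock:
            waiter = _events.pop(name, None)
        if waiter is None:
            print(f"WARNING: Received unexpected event {name}")  # nothing is waiting
        else:
            waiter.set()  # wake the spawn path blocked in wait_for_instance_event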
Dec  1 20:05:33 compute-0 nova_compute[189564]: 2025-12-01 20:05:33.851 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:33 compute-0 nova_compute[189564]: 2025-12-01 20:05:33.965 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:35 compute-0 nova_compute[189564]: 2025-12-01 20:05:35.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:05:38 compute-0 nova_compute[189564]: 2025-12-01 20:05:38.856 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:38 compute-0 nova_compute[189564]: 2025-12-01 20:05:38.968 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:39 compute-0 podman[257404]: 2025-12-01 20:05:39.382831979 +0000 UTC m=+0.137786634 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41)
Dec  1 20:05:41 compute-0 podman[257428]: 2025-12-01 20:05:41.751739606 +0000 UTC m=+0.107892440 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:05:43 compute-0 nova_compute[189564]: 2025-12-01 20:05:43.865 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:43 compute-0 nova_compute[189564]: 2025-12-01 20:05:43.970 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.822 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling process can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.823 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.823 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.824 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.829 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.829 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.830 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.830 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.831 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2e63a3e2-688c-470f-9b69-98ac22f0c892 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.831 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf66438380>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:05:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:48.832 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2e63a3e2-688c-470f-9b69-98ac22f0c892 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 20:05:48 compute-0 nova_compute[189564]: 2025-12-01 20:05:48.869 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:48 compute-0 nova_compute[189564]: 2025-12-01 20:05:48.972 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.688 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Mon, 01 Dec 2025 20:05:48 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-a3f6bd9e-7579-49d4-a2c6-92b5534f45da x-openstack-request-id: req-a3f6bd9e-7579-49d4-a2c6-92b5534f45da _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.688 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2e63a3e2-688c-470f-9b69-98ac22f0c892", "name": "te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2", "status": "ACTIVE", "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "user_id": "87b1f4a5842648dead0562b1cf8b4f18", "metadata": {"metering.server_group": "f148fe63-b9e9-42f1-b9d7-8790a6058874"}, "hostId": "ed8356c925a37a95605f3d20b7786e3709927537fc31622d463f3259", "image": {"id": "bffb6851-f47b-44e0-90e7-e01d72f9a4d2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/bffb6851-f47b-44e0-90e7-e01d72f9a4d2"}]}, "flavor": {"id": "69252fc0-77e5-4ac1-807d-77003542464f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/69252fc0-77e5-4ac1-807d-77003542464f"}]}, "created": "2025-12-01T20:05:21Z", "updated": "2025-12-01T20:05:31Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.29", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ec:bc:e0"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2e63a3e2-688c-470f-9b69-98ac22f0c892"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2e63a3e2-688c-470f-9b69-98ac22f0c892"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T20:05:31.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.688 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2e63a3e2-688c-470f-9b69-98ac22f0c892 used request id req-a3f6bd9e-7579-49d4-a2c6-92b5534f45da request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.689 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2e63a3e2-688c-470f-9b69-98ac22f0c892', 'name': 'te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2', 'flavor': {'id': '69252fc0-77e5-4ac1-807d-77003542464f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'bffb6851-f47b-44e0-90e7-e01d72f9a4d2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'user_id': '87b1f4a5842648dead0562b1cf8b4f18', 'hostId': 'ed8356c925a37a95605f3d20b7786e3709927537fc31622d463f3259', 'status': 'active', 'metadata': {'metering.server_group': 'f148fe63-b9e9-42f1-b9d7-8790a6058874'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
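Ceilometer's compute discovery refreshes instance metadata through the regular Nova API, authenticating with a keystone session (the X-Auth-Token in the REQ line is logged only as a SHA256 digest). A minimal sketch of the same GET with keystoneauth1 and python-novaclient; the endpoint and credentials are placeholders, only the microversion and server UUID come from the log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(auth_url="https://keystone.example.com/v3",
                       username="ceilometer", password="secret",
                       project_name="service",
                       user_domain_name="Default", project_domain_name="Default")
    nova = client.Client("2.1", session=session.Session(auth=auth))

    server = nova.servers.get("2e63a3e2-688c-470f-9b69-98ac22f0c892")
    print(server.status, server.metadata)  # ACTIVE {'metering.server_group': ...}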
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.689 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.690 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.690 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.690 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T20:05:49.690167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.694 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2e63a3e2-688c-470f-9b69-98ac22f0c892 / tap3076324c-17 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.695 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.695 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
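A .delta pollster reports the difference between the current and the previous reading for each (instance, vNIC) pair; tap3076324c-17 was created seconds ago, so there is no predecessor in the cache and the first sample is published with volume 0, exactly as the two lines above show. A toy reduction of that bookkeeping (the real cache lives in the libvirt inspector):

    _prev: dict[tuple, int] = {}

    def delta_sample(key: tuple, current: int) -> int:
        # With no predecessor, current - current == 0: the first poll of a
        # brand-new vNIC always yields a zero delta.
        value = max(current - _prev.get(key, current), 0)
        _prev[key] = current
        return value

    print(delta_sample(("2e63a3e2", "tap3076324c-17"), 12345))  # -> 0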
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.695 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.695 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.695 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.695 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.696 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.696 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.696 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.696 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.696 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.696 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T20:05:49.695978) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.696 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.696 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.697 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.697 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.697 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.697 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.697 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.697 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.698 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.698 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T20:05:49.696922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.698 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.698 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.698 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.698 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.698 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.698 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T20:05:49.697979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.698 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.699 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.699 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.699 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.699 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.699 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.699 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T20:05:49.699008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.699 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.700 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T20:05:49.700031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.714 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.714 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.715 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
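
The disk.device.* meters above are sampled twice per cycle because per-device pollsters emit one sample per (instance, device) pair; 1073741824 bytes is exactly 1 GiB, so the much smaller second device is plausibly a config drive. A hedged sketch of that fan-out, with illustrative names (DiskStats, per_device_samples) that are not ceilometer's API:

from dataclasses import dataclass

@dataclass
class DiskStats:          # illustrative stand-in for the inspector's per-disk stats
    device: str
    capacity: int         # bytes

def per_device_samples(instance_id, disks):
    # One sample per disk device, hence the two volume lines per meter above.
    for d in disks:
        yield {"meter": "disk.device.capacity",
               "resource_id": "%s-%s" % (instance_id, d.device),
               "volume": d.capacity}
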
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.715 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.715 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.715 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.715 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.715 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T20:05:49.715531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.761 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.761 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.762 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.762 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.762 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.762 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.762 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.763 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.763 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.763 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T20:05:49.763076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.763 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.764 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.764 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.764 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.764 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.764 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.764 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T20:05:49.764465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.764 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2>]
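
The ERROR above is the agent permanently blacklisting this instance for network.incoming.bytes.rate: the libvirt inspector never provides *.rate data (the "LibvirtInspector does not provide data" DEBUG two lines earlier), so the pollster raises PollsterPermanentError and the resource is excluded from future cycles instead of failing every interval. A minimal sketch of that behaviour; fail_res_list matches the attribute ceilometer's exception carries, but the rest is illustrative:

class PollsterPermanentError(Exception):
    def __init__(self, resources):
        super().__init__(resources)
        self.fail_res_list = resources  # resources to stop polling

def poll_resources(pollster, resources, blacklist):
    candidates = [r for r in resources if r not in blacklist]
    try:
        return list(pollster.get_samples(candidates))
    except PollsterPermanentError as err:
        # Blacklisted resources are skipped on every later cycle, so this
        # ERROR is logged once rather than once per polling interval.
        blacklist.extend(err.fail_res_list)
        return []
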
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.765 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.765 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.765 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.765 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.765 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.766 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.latency volume: 521841187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.766 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.latency volume: 1540249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.766 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T20:05:49.765852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.767 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.767 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.767 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.767 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.767 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T20:05:49.767662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.767 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.768 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.768 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.768 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.769 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.769 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.769 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.769 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.769 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T20:05:49.769340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.770 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.770 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.770 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.771 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.771 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.771 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.771 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.771 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.772 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.772 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.772 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T20:05:49.771328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.772 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.773 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.773 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.773 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.773 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T20:05:49.773498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.798 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.799 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
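
Assuming power.state reports the raw libvirt domain state, the volume of 1 above is VIR_DOMAIN_RUNNING. A lookup table for reading these samples (values follow libvirt's virDomainState enum):

LIBVIRT_POWER_STATE = {
    0: "no state",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "being shut down",
    5: "shut off",
    6: "crashed",
    7: "pm-suspended",
}
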
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.799 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.799 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.799 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.799 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.799 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.800 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.800 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T20:05:49.799800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.800 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.800 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.801 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.801 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.801 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.801 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.801 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T20:05:49.801645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.801 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.802 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.802 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.802 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.802 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.803 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.803 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.803 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.803 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.803 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.804 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T20:05:49.803267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.804 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.804 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.804 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.805 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.805 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.805 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.805 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.806 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.806 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.806 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.806 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.806 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.806 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T20:05:49.805304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T20:05:49.806800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.807 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.807 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.807 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.807 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.808 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.808 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T20:05:49.808144) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.808 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.808 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.809 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.809 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.809 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.809 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.809 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T20:05:49.809320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.809 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.810 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.810 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.810 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.810 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.810 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.810 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.810 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.811 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.811 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.811 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T20:05:49.810713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.811 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.812 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.812 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.812 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.812 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.813 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.813 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.813 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.813 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.813 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.813 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.814 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2>]
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.814 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T20:05:49.812191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.814 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.814 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.814 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T20:05:49.813787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.814 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.814 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.815 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.815 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/cpu volume: 18490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.815 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
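
The cpu meter is cumulative guest CPU time in nanoseconds, so the 18490000000 above is roughly 18.49 s consumed since the instance started. A hedged sketch of turning two successive samples into average utilisation over a polling interval (cpu_util_percent and its arguments are illustrative):

def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
    # Cumulative nanoseconds of CPU time -> average % over the interval.
    used_s = (curr_ns - prev_ns) / 1e9
    return 100.0 * used_s / (interval_s * vcpus)

# e.g. cpu_util_percent(18490000000, 18790000000, 300) -> 0.1 (% over 5 min)
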
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.815 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T20:05:49.815024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.816 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.816 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.816 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.816 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.816 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T20:05:49.816454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.816 15 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 2e63a3e2-688c-470f-9b69-98ac22f0c892: ceilometer.compute.pollsters.NoVolumeException
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.817 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
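[editor's note] The "volume: Unavailable" debug entry and the WARNING above are the pollster's normal path when libvirt reports no memory statistic for the guest (typically because the balloon driver is inactive): the sample is skipped rather than emitted as zero. A minimal illustrative sketch of that pattern in Python, with hypothetical names rather than ceilometer's actual classes:

    # Illustrative sketch only: how a compute pollster skips a sample when the
    # hypervisor has no value for the requested statistic.
    class NoVolumeException(Exception):
        """Raised when the hypervisor reports no value for a statistic."""

    def stats_to_sample(instance_id, name, raw_value):
        # "Unavailable" in the log corresponds to raw_value being None here.
        if raw_value is None:
            raise NoVolumeException()
        return {"resource_id": instance_id, "meter": name, "volume": raw_value}

    try:
        stats_to_sample("2e63a3e2-688c-470f-9b69-98ac22f0c892", "memory.usage", None)
    except NoVolumeException:
        # The agent logs the WARNING seen above and moves on to the next meter.
        pass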
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.817 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.818 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.819 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.820 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.820 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.820 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.820 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.820 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.821 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.821 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.821 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.821 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.821 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.822 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.822 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.822 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:05:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:05:49.822 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
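[editor's note] The <stevedore.extension.Extension ...> objects named in the coordination checks above are pollster plugins loaded from setuptools entry points. A hedged sketch of enumerating them with stevedore (assumes ceilometer is installed in the environment, and that 'ceilometer.poll.compute' is the entry-point namespace its compute agent uses):

    # Hedged sketch: list the compute pollsters stevedore would load.
    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace="ceilometer.poll.compute",  # assumption: ceilometer's namespace
        invoke_on_load=False,  # enumerate only; do not instantiate pollsters
    )
    for ext in mgr:
        # ext.name is the meter ("cpu", "memory.usage", ...); ext.plugin is the
        # pollster class, e.g. instance_stats.MemoryUsagePollster.
        print(ext.name, ext.plugin)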
Dec  1 20:05:50 compute-0 podman[257454]: 2025-12-01 20:05:50.29882815 +0000 UTC m=+0.067429936 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 20:05:53 compute-0 nova_compute[189564]: 2025-12-01 20:05:53.871 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:53 compute-0 nova_compute[189564]: 2025-12-01 20:05:53.974 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:54 compute-0 podman[257473]: 2025-12-01 20:05:54.297887436 +0000 UTC m=+0.062140116 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:05:58 compute-0 podman[257498]: 2025-12-01 20:05:58.353030564 +0000 UTC m=+0.098538859 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec  1 20:05:58 compute-0 podman[257499]: 2025-12-01 20:05:58.358902322 +0000 UTC m=+0.117438813 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:05:58 compute-0 podman[257496]: 2025-12-01 20:05:58.368015104 +0000 UTC m=+0.128557919 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, release-0.7.12=, managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  1 20:05:58 compute-0 podman[257497]: 2025-12-01 20:05:58.372033861 +0000 UTC m=+0.132853205 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  1 20:05:58 compute-0 podman[257500]: 2025-12-01 20:05:58.389527091 +0000 UTC m=+0.143049722 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
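[editor's note] Each "container health_status ... health_status=healthy" event above is podman running the configured healthcheck.test command inside the container and recording the result. The stored state can be read back with podman inspect; a small sketch via subprocess, using the multipathd container from the log (on older podman releases the field is .State.Healthcheck.Status instead):

    # Sketch: read back the recorded health state for one container.
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", "multipathd"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # "healthy" while health_failing_streak stays 0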
Dec  1 20:05:58 compute-0 nova_compute[189564]: 2025-12-01 20:05:58.875 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:58 compute-0 nova_compute[189564]: 2025-12-01 20:05:58.977 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:05:59 compute-0 podman[203750]: time="2025-12-01T20:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:05:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:05:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
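[editor's note] These two GET requests are prometheus-podman-exporter querying the libpod REST API over /run/podman/podman.sock (the socket mount appears in its config_data above). The same listing can be fetched as raw HTTP over the unix socket; a sketch, assuming root access to the socket:

    # Sketch: the exporter's container listing as raw HTTP over the unix socket.
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/podman/podman.sock")
    sock.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
                 b"Host: d\r\nConnection: close\r\n\r\n")
    chunks = []
    while True:
        data = sock.recv(65536)
        if not data:
            break
        chunks.append(data)
    sock.close()
    # The body after the HTTP headers is the JSON array of containers
    # (29521 bytes in the access-log line above).
    print(b"".join(chunks)[:200])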
Dec  1 20:06:00 compute-0 ovn_controller[97948]: 2025-12-01T20:06:00Z|00173|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Dec  1 20:06:01 compute-0 openstack_network_exporter[205914]: ERROR   20:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:06:01 compute-0 openstack_network_exporter[205914]: ERROR   20:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:06:01 compute-0 openstack_network_exporter[205914]: ERROR   20:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:06:01 compute-0 openstack_network_exporter[205914]: ERROR   20:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:06:01 compute-0 openstack_network_exporter[205914]: ERROR   20:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
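[editor's note] These exporter errors are expected on a compute node of this shape: ovsdb-server and ovn-northd control sockets exist only where those daemons run, and the dpif-netdev/* commands answer only for the userspace (DPDK) datapath, while this host binds ports with "datapath_type": "system" (see the network-info entry further down). A sketch reproducing the datapath probe, assuming ovs-appctl is on PATH and targets the local ovs-vswitchd by default:

    # Sketch: the same probe the exporter makes; on a kernel datapath
    # ovs-vswitchd answers "please specify an existing datapath", as logged.
    import subprocess

    probe = subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                           capture_output=True, text=True)
    print(probe.returncode, (probe.stderr or probe.stdout).strip())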
Dec  1 20:06:03 compute-0 nova_compute[189564]: 2025-12-01 20:06:03.879 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:03 compute-0 nova_compute[189564]: 2025-12-01 20:06:03.979 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:04 compute-0 ovn_controller[97948]: 2025-12-01T20:06:04Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:bc:e0 10.100.3.29
Dec  1 20:06:04 compute-0 ovn_controller[97948]: 2025-12-01T20:06:04Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:bc:e0 10.100.3.29
Dec  1 20:06:08 compute-0 nova_compute[189564]: 2025-12-01 20:06:08.883 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:08 compute-0 nova_compute[189564]: 2025-12-01 20:06:08.985 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:10 compute-0 podman[257603]: 2025-12-01 20:06:10.350772361 +0000 UTC m=+0.109308784 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  1 20:06:11 compute-0 podman[257628]: 2025-12-01 20:06:11.920194331 +0000 UTC m=+0.095500622 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 20:06:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:06:12.225 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:06:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:06:12.226 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:06:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:06:12.227 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
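[editor's note] The acquiring/acquired/"released" trio above is the standard DEBUG trace emitted by oslo_concurrency.lockutils around a critical section. A sketch of the pattern with the real API, reusing the lock name from the log:

    # Sketch of the locking pattern behind the three lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the lock held; lockutils itself logs the
        # 'acquired ... waited' and '"released" ... held' lines at DEBUG.
        pass

    check_child_processes()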
Dec  1 20:06:13 compute-0 nova_compute[189564]: 2025-12-01 20:06:13.887 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:13 compute-0 nova_compute[189564]: 2025-12-01 20:06:13.985 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:18 compute-0 nova_compute[189564]: 2025-12-01 20:06:18.891 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:18 compute-0 nova_compute[189564]: 2025-12-01 20:06:18.989 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:19 compute-0 nova_compute[189564]: 2025-12-01 20:06:19.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:19 compute-0 nova_compute[189564]: 2025-12-01 20:06:19.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:19 compute-0 nova_compute[189564]: 2025-12-01 20:06:19.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
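[editor's note] The "Running periodic task ComputeManager._*" lines come from oslo_service.periodic_task: each method is registered with a decorator and invoked on the manager's tick, and _reclaim_queued_deletes returns immediately because nova's reclaim_instance_interval defaults to 0. A simplified sketch of the pattern with the real oslo.service API:

    # Hedged sketch of the periodic-task pattern; run_periodic_tasks() logs
    # "Running periodic task <name>" before each call.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            reclaim_interval = 0  # nova's CONF.reclaim_instance_interval default
            if reclaim_interval <= 0:
                return  # matches "CONF.reclaim_instance_interval <= 0, skipping..."

    Manager(cfg.CONF).run_periodic_tasks(context=None)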
Dec  1 20:06:21 compute-0 podman[257655]: 2025-12-01 20:06:21.337004026 +0000 UTC m=+0.104810129 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  1 20:06:23 compute-0 nova_compute[189564]: 2025-12-01 20:06:23.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:23 compute-0 nova_compute[189564]: 2025-12-01 20:06:23.894 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:23 compute-0 nova_compute[189564]: 2025-12-01 20:06:23.993 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:25 compute-0 nova_compute[189564]: 2025-12-01 20:06:25.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:25 compute-0 nova_compute[189564]: 2025-12-01 20:06:25.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 20:06:25 compute-0 nova_compute[189564]: 2025-12-01 20:06:25.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 20:06:25 compute-0 podman[257675]: 2025-12-01 20:06:25.372816058 +0000 UTC m=+0.128320243 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 20:06:25 compute-0 nova_compute[189564]: 2025-12-01 20:06:25.574 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:06:25 compute-0 nova_compute[189564]: 2025-12-01 20:06:25.574 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:06:25 compute-0 nova_compute[189564]: 2025-12-01 20:06:25.575 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 20:06:25 compute-0 nova_compute[189564]: 2025-12-01 20:06:25.575 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2e63a3e2-688c-470f-9b69-98ac22f0c892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:06:26 compute-0 nova_compute[189564]: 2025-12-01 20:06:26.654 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updating instance_info_cache with network_info: [{"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:06:26 compute-0 nova_compute[189564]: 2025-12-01 20:06:26.685 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 20:06:26 compute-0 nova_compute[189564]: 2025-12-01 20:06:26.686 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
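[editor's note] The network_info payload cached above is a JSON-serializable list of VIFs; the fixed address it carries (10.100.3.29 on fa:16:3e:ec:bc:e0) is the same lease acknowledged in the earlier DHCPOFFER/DHCPACK lines. A sketch extracting the addresses, trimmed to the fields actually used:

    # Sketch: pull MAC and fixed IPs out of the logged network_info structure.
    import json

    network_info = json.loads("""[{
      "id": "3076324c-1772-4ebf-8d52-056282f5b5b9",
      "address": "fa:16:3e:ec:bc:e0",
      "network": {"subnets": [{"cidr": "10.100.0.0/16",
          "ips": [{"address": "10.100.3.29", "type": "fixed"}]}]}
    }]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["address"], ips)  # fa:16:3e:ec:bc:e0 ['10.100.3.29']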
Dec  1 20:06:26 compute-0 nova_compute[189564]: 2025-12-01 20:06:26.688 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:26 compute-0 nova_compute[189564]: 2025-12-01 20:06:26.689 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:26 compute-0 nova_compute[189564]: 2025-12-01 20:06:26.691 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:27 compute-0 nova_compute[189564]: 2025-12-01 20:06:27.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:27 compute-0 nova_compute[189564]: 2025-12-01 20:06:27.279 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:06:27 compute-0 nova_compute[189564]: 2025-12-01 20:06:27.280 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:06:27 compute-0 nova_compute[189564]: 2025-12-01 20:06:27.281 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:06:27 compute-0 nova_compute[189564]: 2025-12-01 20:06:27.282 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 20:06:27 compute-0 nova_compute[189564]: 2025-12-01 20:06:27.393 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:06:27 compute-0 nova_compute[189564]: 2025-12-01 20:06:27.497 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:06:27 compute-0 nova_compute[189564]: 2025-12-01 20:06:27.499 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:06:27 compute-0 nova_compute[189564]: 2025-12-01 20:06:27.579 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
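[editor's note] Both qemu-img runs are wrapped in oslo_concurrency's prlimit helper, capping address space at 1 GiB and CPU time at 30 s so a hung or malformed image cannot stall the resource audit. processutils.execute() builds exactly this wrapper when given a ProcessLimits; a sketch mirroring the logged command:

    # Sketch: the same guarded invocation via the real oslo.concurrency API;
    # execute() re-execs through "python -m oslo_concurrency.prlimit".
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,  # --as
                                        cpu_time=30)                       # --cpu
    out, err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk",
        "--force-share", "--output=json",
        prlimit=limits,
    )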
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.000 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.002 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5189MB free_disk=72.27693939208984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.003 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.003 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.226 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 2e63a3e2-688c-470f-9b69-98ac22f0c892 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.228 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.228 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.329 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.348 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
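[editor's note] The inventory payload determines what placement will admit: effective capacity per resource class is (total - reserved) * allocation_ratio, so despite 8 physical vCPUs this node advertises 32 schedulable VCPUs. A quick check of the numbers in the line above:

    # Effective capacity implied by the reported inventory.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2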
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.384 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.385 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.382s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.387 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.388 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.433 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.897 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:28 compute-0 nova_compute[189564]: 2025-12-01 20:06:28.995 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:29 compute-0 podman[257708]: 2025-12-01 20:06:29.344088796 +0000 UTC m=+0.102654491 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:06:29 compute-0 podman[257707]: 2025-12-01 20:06:29.344710906 +0000 UTC m=+0.111575377 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  1 20:06:29 compute-0 podman[257711]: 2025-12-01 20:06:29.361390599 +0000 UTC m=+0.100502433 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  1 20:06:29 compute-0 podman[257715]: 2025-12-01 20:06:29.374178307 +0000 UTC m=+0.113371073 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Dec  1 20:06:29 compute-0 podman[257722]: 2025-12-01 20:06:29.405422335 +0000 UTC m=+0.142363480 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec  1 20:06:29 compute-0 podman[203750]: time="2025-12-01T20:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:06:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:06:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec  1 20:06:31 compute-0 openstack_network_exporter[205914]: ERROR   20:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:06:31 compute-0 openstack_network_exporter[205914]: ERROR   20:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:06:31 compute-0 openstack_network_exporter[205914]: ERROR   20:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:06:31 compute-0 openstack_network_exporter[205914]: ERROR   20:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:06:31 compute-0 openstack_network_exporter[205914]: ERROR   20:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:06:33 compute-0 nova_compute[189564]: 2025-12-01 20:06:33.902 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:33 compute-0 nova_compute[189564]: 2025-12-01 20:06:33.997 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:34 compute-0 nova_compute[189564]: 2025-12-01 20:06:34.432 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:38 compute-0 nova_compute[189564]: 2025-12-01 20:06:38.908 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:39 compute-0 nova_compute[189564]: 2025-12-01 20:06:39.003 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:39 compute-0 nova_compute[189564]: 2025-12-01 20:06:39.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:41 compute-0 podman[257802]: 2025-12-01 20:06:41.342431171 +0000 UTC m=+0.107658690 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, vcs-type=git)
Dec  1 20:06:42 compute-0 podman[257822]: 2025-12-01 20:06:42.366262767 +0000 UTC m=+0.123851548 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 20:06:43 compute-0 nova_compute[189564]: 2025-12-01 20:06:43.914 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:44 compute-0 nova_compute[189564]: 2025-12-01 20:06:44.003 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:48 compute-0 nova_compute[189564]: 2025-12-01 20:06:48.917 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:49 compute-0 nova_compute[189564]: 2025-12-01 20:06:49.005 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:52 compute-0 podman[257848]: 2025-12-01 20:06:52.363245002 +0000 UTC m=+0.120400479 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:06:53 compute-0 nova_compute[189564]: 2025-12-01 20:06:53.921 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:54 compute-0 nova_compute[189564]: 2025-12-01 20:06:54.009 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:56 compute-0 podman[257871]: 2025-12-01 20:06:56.374202958 +0000 UTC m=+0.124533271 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:06:57 compute-0 nova_compute[189564]: 2025-12-01 20:06:57.269 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:06:57 compute-0 nova_compute[189564]: 2025-12-01 20:06:57.271 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  1 20:06:58 compute-0 nova_compute[189564]: 2025-12-01 20:06:58.924 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:59 compute-0 nova_compute[189564]: 2025-12-01 20:06:59.011 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:06:59 compute-0 podman[203750]: time="2025-12-01T20:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:06:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:06:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Dec  1 20:07:00 compute-0 podman[257898]: 2025-12-01 20:07:00.316738488 +0000 UTC m=+0.086519285 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 20:07:00 compute-0 podman[257900]: 2025-12-01 20:07:00.349113062 +0000 UTC m=+0.096102182 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  1 20:07:00 compute-0 podman[257899]: 2025-12-01 20:07:00.349590038 +0000 UTC m=+0.114132818 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  1 20:07:00 compute-0 podman[257897]: 2025-12-01 20:07:00.383278304 +0000 UTC m=+0.143524617 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec  1 20:07:00 compute-0 podman[257905]: 2025-12-01 20:07:00.395273628 +0000 UTC m=+0.150283064 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec  1 20:07:01 compute-0 openstack_network_exporter[205914]: ERROR   20:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:07:01 compute-0 openstack_network_exporter[205914]: ERROR   20:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:07:01 compute-0 openstack_network_exporter[205914]: ERROR   20:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:07:01 compute-0 openstack_network_exporter[205914]: ERROR   20:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:07:01 compute-0 openstack_network_exporter[205914]: ERROR   20:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:07:03 compute-0 nova_compute[189564]: 2025-12-01 20:07:03.929 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:04 compute-0 nova_compute[189564]: 2025-12-01 20:07:04.013 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:08 compute-0 nova_compute[189564]: 2025-12-01 20:07:08.933 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:09 compute-0 nova_compute[189564]: 2025-12-01 20:07:09.016 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:07:12.225 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:07:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:07:12.226 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:07:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:07:12.228 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:07:12 compute-0 podman[258006]: 2025-12-01 20:07:12.363329916 +0000 UTC m=+0.117369761 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 20:07:12 compute-0 podman[258025]: 2025-12-01 20:07:12.50461322 +0000 UTC m=+0.086434183 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 20:07:13 compute-0 nova_compute[189564]: 2025-12-01 20:07:13.936 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:14 compute-0 nova_compute[189564]: 2025-12-01 20:07:14.020 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:18 compute-0 nova_compute[189564]: 2025-12-01 20:07:18.939 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:19 compute-0 nova_compute[189564]: 2025-12-01 20:07:19.022 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:20 compute-0 nova_compute[189564]: 2025-12-01 20:07:20.269 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:07:20 compute-0 nova_compute[189564]: 2025-12-01 20:07:20.271 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:07:20 compute-0 nova_compute[189564]: 2025-12-01 20:07:20.271 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 20:07:23 compute-0 podman[258050]: 2025-12-01 20:07:23.316164033 +0000 UTC m=+0.086873447 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  1 20:07:23 compute-0 nova_compute[189564]: 2025-12-01 20:07:23.943 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:24 compute-0 nova_compute[189564]: 2025-12-01 20:07:24.024 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:24 compute-0 nova_compute[189564]: 2025-12-01 20:07:24.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:07:26 compute-0 nova_compute[189564]: 2025-12-01 20:07:26.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:07:27 compute-0 nova_compute[189564]: 2025-12-01 20:07:27.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:07:27 compute-0 nova_compute[189564]: 2025-12-01 20:07:27.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 20:07:27 compute-0 nova_compute[189564]: 2025-12-01 20:07:27.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 20:07:27 compute-0 podman[258071]: 2025-12-01 20:07:27.301388908 +0000 UTC m=+0.072582720 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 20:07:28 compute-0 nova_compute[189564]: 2025-12-01 20:07:28.596 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:07:28 compute-0 nova_compute[189564]: 2025-12-01 20:07:28.597 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:07:28 compute-0 nova_compute[189564]: 2025-12-01 20:07:28.598 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 20:07:28 compute-0 nova_compute[189564]: 2025-12-01 20:07:28.599 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2e63a3e2-688c-470f-9b69-98ac22f0c892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:07:28 compute-0 nova_compute[189564]: 2025-12-01 20:07:28.946 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:29 compute-0 nova_compute[189564]: 2025-12-01 20:07:29.026 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:29 compute-0 podman[203750]: time="2025-12-01T20:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:07:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:07:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec  1 20:07:30 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 20:07:31 compute-0 podman[258099]: 2025-12-01 20:07:31.059838095 +0000 UTC m=+0.089460689 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3)
Dec  1 20:07:31 compute-0 podman[258106]: 2025-12-01 20:07:31.071037573 +0000 UTC m=+0.089024395 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:07:31 compute-0 podman[258100]: 2025-12-01 20:07:31.071019753 +0000 UTC m=+0.090261095 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  1 20:07:31 compute-0 podman[258098]: 2025-12-01 20:07:31.07500574 +0000 UTC m=+0.111150852 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4)
Dec  1 20:07:31 compute-0 podman[258112]: 2025-12-01 20:07:31.116029291 +0000 UTC m=+0.123271640 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:07:31 compute-0 openstack_network_exporter[205914]: ERROR   20:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:07:31 compute-0 openstack_network_exporter[205914]: ERROR   20:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:07:31 compute-0 openstack_network_exporter[205914]: ERROR   20:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:07:31 compute-0 openstack_network_exporter[205914]: ERROR   20:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:07:31 compute-0 openstack_network_exporter[205914]: ERROR   20:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
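The exporter errors above come from its attempts to drive ovs-appctl/ovn-appctl style commands: each daemon exposes a UNIX control socket named <daemon>.<pid>.ctl in its run directory, and the call is abandoned when no such file exists. A minimal sketch of the same probe; the socket paths are assumptions (standard defaults, not read from the exporter's configuration):

    import glob

    # Default control-socket locations; the exporter's actual search
    # paths may differ.
    PATTERNS = {
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
    }

    for daemon, pattern in PATTERNS.items():
        sockets = glob.glob(pattern)
        if not sockets:
            print(f"no control socket files found for {daemon}")
        else:
            print(f"{daemon}: {sockets[0]}")

On a compute-only node these errors can be expected: ovn-northd runs on control-plane nodes, and the dpif-netdev/pmd-* commands apply only to a userspace (netdev) datapath, whereas this host binds ports with datapath_type "system" (see the network_info below).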
Dec  1 20:07:32 compute-0 nova_compute[189564]: 2025-12-01 20:07:32.731 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updating instance_info_cache with network_info: [{"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:07:32 compute-0 nova_compute[189564]: 2025-12-01 20:07:32.832 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:07:32 compute-0 nova_compute[189564]: 2025-12-01 20:07:32.833 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
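The network_info blob cached above is a list of VIFs, each with nested subnet and IP structures. A short sketch of pulling the fixed address out of it, trimmed to just the keys used (the full structure is as logged):

    import json

    # Minimal slice of the network_info structure logged above.
    network_info = json.loads('''
    [{"network": {"subnets": [{"ips":
        [{"address": "10.100.3.29", "type": "fixed"}]}]}}]
    ''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    print(ip["address"])   # 10.100.3.29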
Dec  1 20:07:32 compute-0 nova_compute[189564]: 2025-12-01 20:07:32.834 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:07:32 compute-0 nova_compute[189564]: 2025-12-01 20:07:32.835 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:07:32 compute-0 nova_compute[189564]: 2025-12-01 20:07:32.836 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.059 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.059 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.060 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
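The Acquiring/acquired/released triplet above is the standard trace emitted by oslo_concurrency's named-lock helper; the "waited" and "held" durations bracket the critical section. A minimal sketch of the pattern (not nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Critical section: the DEBUG lines above time how long the
        # caller waited for this lock and how long it was held.
        pass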
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.060 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.144 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.249 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.250 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.322 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
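Both subprocess runs above are the same probe: qemu-img info in JSON mode, wrapped in oslo_concurrency.prlimit to cap the child at 1 GiB of address space and 30 s of CPU time. A sketch of issuing the probe directly, without the prlimit wrapper (disk path taken from the log):

    import json
    import subprocess

    # --force-share lets qemu-img read metadata while the running guest
    # holds a write lock on the disk.
    cmd = [
        "qemu-img", "info",
        "/var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk",
        "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])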
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.634 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.636 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5185MB free_disk=72.27693939208984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.637 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.637 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.722 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 2e63a3e2-688c-470f-9b69-98ac22f0c892 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.722 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.723 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.838 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:07:33 compute-0 nova_compute[189564]: 2025-12-01 20:07:33.956 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:07:34 compute-0 nova_compute[189564]: 2025-12-01 20:07:34.023 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:07:34 compute-0 nova_compute[189564]: 2025-12-01 20:07:34.025 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 20:07:34 compute-0 nova_compute[189564]: 2025-12-01 20:07:34.025 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
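The inventory reported to placement above determines schedulable capacity as (total - reserved) * allocation_ratio per resource class. Worked through with the logged values:

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2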
Dec  1 20:07:34 compute-0 nova_compute[189564]: 2025-12-01 20:07:34.029 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:07:38 compute-0 nova_compute[189564]: 2025-12-01 20:07:38.959 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:07:39 compute-0 nova_compute[189564]: 2025-12-01 20:07:39.030 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:07:40 compute-0 nova_compute[189564]: 2025-12-01 20:07:40.021 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:07:40 compute-0 nova_compute[189564]: 2025-12-01 20:07:40.022 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:07:43 compute-0 podman[258211]: 2025-12-01 20:07:43.381658498 +0000 UTC m=+0.141942126 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 20:07:43 compute-0 podman[258212]: 2025-12-01 20:07:43.383047962 +0000 UTC m=+0.135388227 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, architecture=x86_64)
Dec  1 20:07:43 compute-0 nova_compute[189564]: 2025-12-01 20:07:43.963 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:07:44 compute-0 nova_compute[189564]: 2025-12-01 20:07:44.032 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.823 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.824 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.829 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6cd3b320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
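The registration lines above hand each pollster (a stevedore Extension) to a thread-pool executor; with a single worker thread and many pollsters, submissions queue and the cycle runs serially, which is what the first DEBUG message in this batch warns about. A stripped-down sketch of that dispatch (illustrative only, not ceilometer's manager):

    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        # Stand-in for one pollster's sampling cycle.
        return f"polled {name}"

    # One worker, several pollsters: tasks queue and run one at a time.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, n) for n in
                   ("network.incoming.bytes.delta", "network.outgoing.packets")]
        for future in futures:
            print(future.result())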
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.849 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2e63a3e2-688c-470f-9b69-98ac22f0c892', 'name': 'te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2', 'flavor': {'id': '69252fc0-77e5-4ac1-807d-77003542464f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'bffb6851-f47b-44e0-90e7-e01d72f9a4d2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'user_id': '87b1f4a5842648dead0562b1cf8b4f18', 'hostId': 'ed8356c925a37a95605f3d20b7786e3709927537fc31622d463f3259', 'status': 'active', 'metadata': {'metering.server_group': 'f148fe63-b9e9-42f1-b9d7-8790a6058874'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.850 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.850 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.850 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T20:07:48.850950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.851 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.859 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.bytes.delta volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.861 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
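Meters with a ".delta" suffix report the change in a cumulative counter since the previous polling cycle (here, 1262 bytes received in the interval). A sketch of that bookkeeping, keeping the previous reading per instance/device pair (illustrative; ceilometer's sample cache carries more state):

    previous = {}

    def delta(key, cumulative):
        """Return the increase since the last reading, or None on the
        first cycle for this key."""
        prior = previous.get(key)
        previous[key] = cumulative
        return None if prior is None else cumulative - prior

    key = ("2e63a3e2-688c-470f-9b69-98ac22f0c892", "tap3076324c-17")
    print(delta(key, 10_000))   # None (first cycle)
    print(delta(key, 11_262))   # 1262, as in the sample above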
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.861 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.862 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.862 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.863 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T20:07:48.863667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.864 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.866 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.867 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.867 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.868 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.869 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.869 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T20:07:48.870052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.870 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.872 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.873 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.873 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.875 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.876 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.876 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.876 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.876 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.877 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.877 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.878 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.878 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.878 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.879 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.879 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T20:07:48.876385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.879 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.880 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.880 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.880 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.880 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.881 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T20:07:48.879026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.882 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T20:07:48.881062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.909 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.910 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.911 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
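Two capacity samples are emitted because the instance exposes two block devices. The first is exactly the flavor's 1 GB root disk in bytes; the second, much smaller device (509952 bytes) is not identified in the log (plausibly a config drive, though that is an assumption). The first value checks out as one binary gigabyte:

    # The flavor (m1.nano, per the discovery data above) defines disk=1 GB;
    # libvirt reports the capacity in bytes.
    print(1073741824 == 1 * 1024**3)   # True: 1 GiB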
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.911 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.911 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.912 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.912 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.912 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T20:07:48.912456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 nova_compute[189564]: 2025-12-01 20:07:48.966 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.984 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.bytes volume: 28969984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.985 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.985 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.986 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.986 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.986 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.986 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.986 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.987 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.987 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.988 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.988 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T20:07:48.986836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.988 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.988 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.988 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.988 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.989 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.989 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.latency volume: 649034984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.989 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.latency volume: 56737496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.990 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.990 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.990 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.990 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.990 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.990 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.990 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.requests volume: 1041 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.991 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.991 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
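
Two volume lines per disk meter (1041 and 107 read requests above) indicate the instance has two block devices, each with its own counter set. A sketch of where such per-device numbers come from, assuming the python3-libvirt bindings, read access to the local hypervisor, and illustrative device names:

    # Per-device disk counters via libvirt (vda/vdb names assumed).
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("2e63a3e2-688c-470f-9b69-98ac22f0c892")
    for dev in ("vda", "vdb"):
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, "read.requests:", rd_req, "read.bytes:", rd_bytes)
    conn.close()
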
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.991 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.991 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.991 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.991 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.992 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.992 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.992 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.992 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.993 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.994 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T20:07:48.989096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.994 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.994 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.994 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.994 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.995 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.995 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.995 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.995 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.995 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.996 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.996 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T20:07:48.990580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T20:07:48.992017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.997 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T20:07:48.994451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:48.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T20:07:48.996128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.030 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.031 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
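
The power.state volume of 1 matches libvirt's VIR_DOMAIN_RUNNING state. A minimal check against the same domain (sketch; connection URI assumed):

    # Compare the domain state to libvirt's enum.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("2e63a3e2-688c-470f-9b69-98ac22f0c892")
    state, reason = dom.state()
    print(state == libvirt.VIR_DOMAIN_RUNNING)   # True when the volume is 1
    conn.close()
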
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.031 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.031 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.031 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.031 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.031 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.032 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.latency volume: 3249905700 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.032 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.032 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.032 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.032 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.033 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.033 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.033 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.033 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.requests volume: 319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.036 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.037 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:07:49 compute-0 nova_compute[189564]: 2025-12-01 20:07:49.035 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.037 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.037 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.037 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.038 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.038 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.038 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.038 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.039 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.039 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.039 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.039 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.039 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.040 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.040 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.040 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.040 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.040 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.041 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.041 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.041 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.041 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.042 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.042 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.042 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.042 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T20:07:49.031825) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.042 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T20:07:49.033247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T20:07:49.038115) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.043 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T20:07:49.039997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T20:07:49.041224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.045 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.048 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.048 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.048 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.048 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.048 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.049 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.049 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.049 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.049 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.049 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.049 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.049 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.050 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.050 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.050 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.050 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.050 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.050 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.050 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.051 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.051 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.051 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
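
Rate meters such as network.outgoing.bytes.rate are derived from consecutive cumulative readings, so a cycle with nothing new to diff against is skipped, as logged above. A toy delta helper showing the computation; the (timestamp, value) pairs are illustrative, loosely based on the byte counters in this cycle:

    # Toy rate-from-cumulative helper (values illustrative).
    def rate(prev, curr):
        (t0, v0), (t1, v1) = prev, curr
        return (v1 - v0) / (t1 - t0)          # units per second

    print(rate((0.0, 1000), (10.0, 1620)))    # 62.0 B/s
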
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.051 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.051 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.051 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.051 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.051 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.052 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/cpu volume: 136340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.052 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
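
The cpu sample is cumulative guest CPU time in nanoseconds (136340000000 above is about 136.34 s); utilisation comes from the delta between two polls. A worked example where the previous reading, polling interval, and vCPU count are all assumed:

    # Worked CPU-utilisation example (prev_ns, interval, vcpus assumed).
    prev_ns = 120_000_000_000                 # hypothetical earlier sample
    curr_ns = 136_340_000_000                 # cpu volume from the log above
    interval_s, vcpus = 300, 1
    util_pct = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
    print(f"{util_pct:.2f}%")                 # ~5.45%
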
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.052 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.052 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.052 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.052 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.053 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.053 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/memory.usage volume: 43.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.053 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
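
memory.usage is reported in MB (about 43.47 MB here). To pull any of these sampled values back out of a log like this one, a small parser is enough; a sketch, with the regex written against the "_stats_to_sample" lines above:

    # Extract meter name and value from a sample debug line.
    import re

    pat = re.compile(
        r"(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<vol>[\d.]+)")
    line = "2e63a3e2-688c-470f-9b69-98ac22f0c892/memory.usage volume: 43.47265625"
    m = pat.search(line)
    print(m.group("meter"), float(m.group("vol")))   # memory.usage 43.47265625
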
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.053 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.054 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.055 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T20:07:49.042907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T20:07:49.048486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T20:07:49.049710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T20:07:49.050707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T20:07:49.051918) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:07:49.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T20:07:49.053040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:07:53 compute-0 nova_compute[189564]: 2025-12-01 20:07:53.971 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:54 compute-0 nova_compute[189564]: 2025-12-01 20:07:54.040 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:54 compute-0 podman[258253]: 2025-12-01 20:07:54.345302751 +0000 UTC m=+0.107292109 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
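
Each health_status=healthy event is podman executing the container's configured healthcheck command ('/openstack/healthcheck' in the config_data above). The same check can be triggered by hand; a sketch via subprocess, using the container name from the log:

    # Run the configured healthcheck once by hand.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"],   # name from the log
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")
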
Dec  1 20:07:58 compute-0 podman[258273]: 2025-12-01 20:07:58.334083979 +0000 UTC m=+0.105291355 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 20:07:58 compute-0 nova_compute[189564]: 2025-12-01 20:07:58.975 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:59 compute-0 nova_compute[189564]: 2025-12-01 20:07:59.047 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:07:59 compute-0 podman[203750]: time="2025-12-01T20:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:07:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:07:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
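
The two GET lines above are the podman exporter querying podman's libpod REST API over the unix socket mounted into its container. The same query can be issued directly; a sketch over the raw socket using only the standard library, with the socket path taken from the exporter's config_data:

    # Raw HTTP over the podman socket, stdlib only.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().status)   # expect 200, as in the access log
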
Dec  1 20:08:01 compute-0 podman[258298]: 2025-12-01 20:08:01.302080009 +0000 UTC m=+0.068784880 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  1 20:08:01 compute-0 podman[258296]: 2025-12-01 20:08:01.352170609 +0000 UTC m=+0.122954050 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  1 20:08:01 compute-0 podman[258299]: 2025-12-01 20:08:01.358936866 +0000 UTC m=+0.123815698 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  1 20:08:01 compute-0 podman[258295]: 2025-12-01 20:08:01.38035563 +0000 UTC m=+0.141256295 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, name=ubi9, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.4, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 20:08:01 compute-0 podman[258297]: 2025-12-01 20:08:01.382629372 +0000 UTC m=+0.135592213 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:08:01 compute-0 openstack_network_exporter[205914]: ERROR   20:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:08:01 compute-0 openstack_network_exporter[205914]: ERROR   20:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:08:01 compute-0 openstack_network_exporter[205914]: ERROR   20:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:08:01 compute-0 openstack_network_exporter[205914]: ERROR   20:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:08:03 compute-0 nova_compute[189564]: 2025-12-01 20:08:03.980 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:04 compute-0 nova_compute[189564]: 2025-12-01 20:08:04.050 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:08 compute-0 nova_compute[189564]: 2025-12-01 20:08:08.988 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:09 compute-0 nova_compute[189564]: 2025-12-01 20:08:09.053 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:12.226 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:08:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:12.227 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:08:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:12.227 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:08:13 compute-0 nova_compute[189564]: 2025-12-01 20:08:13.997 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:14 compute-0 nova_compute[189564]: 2025-12-01 20:08:14.056 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:14 compute-0 podman[258393]: 2025-12-01 20:08:14.333764664 +0000 UTC m=+0.087572850 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:08:14 compute-0 podman[258394]: 2025-12-01 20:08:14.346537362 +0000 UTC m=+0.101020189 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec  1 20:08:19 compute-0 nova_compute[189564]: 2025-12-01 20:08:19.002 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:19 compute-0 nova_compute[189564]: 2025-12-01 20:08:19.059 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:20 compute-0 nova_compute[189564]: 2025-12-01 20:08:20.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:08:20 compute-0 nova_compute[189564]: 2025-12-01 20:08:20.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:08:20 compute-0 nova_compute[189564]: 2025-12-01 20:08:20.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 20:08:24 compute-0 nova_compute[189564]: 2025-12-01 20:08:24.008 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:24 compute-0 nova_compute[189564]: 2025-12-01 20:08:24.061 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:24 compute-0 nova_compute[189564]: 2025-12-01 20:08:24.251 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:08:25 compute-0 podman[258444]: 2025-12-01 20:08:25.30190142 +0000 UTC m=+0.078195909 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:08:27 compute-0 nova_compute[189564]: 2025-12-01 20:08:27.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:08:27 compute-0 nova_compute[189564]: 2025-12-01 20:08:27.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 20:08:27 compute-0 nova_compute[189564]: 2025-12-01 20:08:27.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 20:08:27 compute-0 nova_compute[189564]: 2025-12-01 20:08:27.613 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:08:27 compute-0 nova_compute[189564]: 2025-12-01 20:08:27.614 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:08:27 compute-0 nova_compute[189564]: 2025-12-01 20:08:27.614 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 20:08:27 compute-0 nova_compute[189564]: 2025-12-01 20:08:27.614 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2e63a3e2-688c-470f-9b69-98ac22f0c892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.160 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.160 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.174 189568 DEBUG nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.262 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.263 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.277 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.278 189568 INFO nova.compute.claims [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Claim successful on node compute-0.ctlplane.example.com
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.397 189568 DEBUG nova.compute.provider_tree [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.412 189568 DEBUG nova.scheduler.client.report [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.432 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.433 189568 DEBUG nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.477 189568 DEBUG nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.477 189568 DEBUG nova.network.neutron [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.497 189568 INFO nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.518 189568 DEBUG nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.588 189568 DEBUG nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.590 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.590 189568 INFO nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Creating image(s)
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.591 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "/var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.591 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "/var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.592 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "/var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.604 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.677 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.678 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "556b39aa36844a62d14eda3a6341e6c6cb1bcd4a" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.678 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "556b39aa36844a62d14eda3a6341e6c6cb1bcd4a" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.689 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.711 189568 DEBUG nova.policy [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87b1f4a5842648dead0562b1cf8b4f18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.716 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updating instance_info_cache with network_info: [{"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.738 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.738 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.739 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.740 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.763 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.764 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a,backing_fmt=raw /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.820 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a,backing_fmt=raw /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk 1073741824" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.821 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "556b39aa36844a62d14eda3a6341e6c6cb1bcd4a" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.822 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.883 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/556b39aa36844a62d14eda3a6341e6c6cb1bcd4a --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.884 189568 DEBUG nova.virt.disk.api [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Checking if we can resize image /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.884 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.974 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.976 189568 DEBUG nova.virt.disk.api [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Cannot resize image /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.977 189568 DEBUG nova.objects.instance [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.992 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.993 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Ensure instance console log exists: /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.994 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.994 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:08:28 compute-0 nova_compute[189564]: 2025-12-01 20:08:28.995 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:08:29 compute-0 nova_compute[189564]: 2025-12-01 20:08:29.012 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:29 compute-0 nova_compute[189564]: 2025-12-01 20:08:29.062 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:29 compute-0 podman[258479]: 2025-12-01 20:08:29.338515536 +0000 UTC m=+0.110176660 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 20:08:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:29.676 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  1 20:08:29 compute-0 nova_compute[189564]: 2025-12-01 20:08:29.678 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:08:29 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:29.682 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  1 20:08:29 compute-0 podman[203750]: time="2025-12-01T20:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:08:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:08:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec  1 20:08:29 compute-0 nova_compute[189564]: 2025-12-01 20:08:29.798 189568 DEBUG nova.network.neutron [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Successfully created port: 3f58b3a2-d9b9-4462-8f74-88eea7d00105 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.282 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.283 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.284 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.284 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.364 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.436 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.438 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.505 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.527 189568 DEBUG nova.network.neutron [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Successfully updated port: 3f58b3a2-d9b9-4462-8f74-88eea7d00105 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.547 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "refresh_cache-1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.548 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquired lock "refresh_cache-1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.548 189568 DEBUG nova.network.neutron [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.625 189568 DEBUG nova.compute.manager [req-6e4ab898-c9aa-4ff5-b0ca-596d20d3fc7d req-98005170-4d75-4859-ad9a-2389173fbaab 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Received event network-changed-3f58b3a2-d9b9-4462-8f74-88eea7d00105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.626 189568 DEBUG nova.compute.manager [req-6e4ab898-c9aa-4ff5-b0ca-596d20d3fc7d req-98005170-4d75-4859-ad9a-2389173fbaab 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Refreshing instance network info cache due to event network-changed-3f58b3a2-d9b9-4462-8f74-88eea7d00105. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.626 189568 DEBUG oslo_concurrency.lockutils [req-6e4ab898-c9aa-4ff5-b0ca-596d20d3fc7d req-98005170-4d75-4859-ad9a-2389173fbaab 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "refresh_cache-1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.710 189568 DEBUG nova.network.neutron [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.910 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.911 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5195MB free_disk=72.27669906616211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.912 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.912 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.975 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 2e63a3e2-688c-470f-9b69-98ac22f0c892 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.976 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.976 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 20:08:30 compute-0 nova_compute[189564]: 2025-12-01 20:08:30.976 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 20:08:31 compute-0 nova_compute[189564]: 2025-12-01 20:08:31.036 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:08:31 compute-0 nova_compute[189564]: 2025-12-01 20:08:31.050 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 20:08:31 compute-0 nova_compute[189564]: 2025-12-01 20:08:31.069 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 20:08:31 compute-0 nova_compute[189564]: 2025-12-01 20:08:31.070 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:08:31 compute-0 openstack_network_exporter[205914]: ERROR   20:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:08:31 compute-0 openstack_network_exporter[205914]: ERROR   20:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:08:31 compute-0 openstack_network_exporter[205914]: ERROR   20:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:08:31 compute-0 openstack_network_exporter[205914]: ERROR   20:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:08:31 compute-0 openstack_network_exporter[205914]: ERROR   20:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:08:31 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:31.686 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:08:31 compute-0 nova_compute[189564]: 2025-12-01 20:08:31.867 189568 DEBUG nova.network.neutron [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Updating instance_info_cache with network_info: [{"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.138 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Releasing lock "refresh_cache-1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.139 189568 DEBUG nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Instance network_info: |[{"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.141 189568 DEBUG oslo_concurrency.lockutils [req-6e4ab898-c9aa-4ff5-b0ca-596d20d3fc7d req-98005170-4d75-4859-ad9a-2389173fbaab 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquired lock "refresh_cache-1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.142 189568 DEBUG nova.network.neutron [req-6e4ab898-c9aa-4ff5-b0ca-596d20d3fc7d req-98005170-4d75-4859-ad9a-2389173fbaab 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Refreshing network info cache for port 3f58b3a2-d9b9-4462-8f74-88eea7d00105 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.146 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Start _get_guest_xml network_info=[{"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:05:12Z,direct_url=<?>,disk_format='qcow2',id=bffb6851-f47b-44e0-90e7-e01d72f9a4d2,min_disk=0,min_ram=0,name='tempest-scenario-img--1009152532',owner='ce8fb01897ec4dc4a54e7b478a0450c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:05:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'guest_format': None, 'encryption_options': None, 'size': 0, 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': 'bffb6851-f47b-44e0-90e7-e01d72f9a4d2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.153 189568 WARNING nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.162 189568 DEBUG nova.virt.libvirt.host [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.162 189568 DEBUG nova.virt.libvirt.host [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.166 189568 DEBUG nova.virt.libvirt.host [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.167 189568 DEBUG nova.virt.libvirt.host [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.167 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.168 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-01T20:00:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='69252fc0-77e5-4ac1-807d-77003542464f',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-01T20:05:12Z,direct_url=<?>,disk_format='qcow2',id=bffb6851-f47b-44e0-90e7-e01d72f9a4d2,min_disk=0,min_ram=0,name='tempest-scenario-img--1009152532',owner='ce8fb01897ec4dc4a54e7b478a0450c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-01T20:05:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.168 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.169 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.169 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.169 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.169 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.170 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.170 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.170 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.171 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.171 189568 DEBUG nova.virt.hardware [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.174 189568 DEBUG nova.virt.libvirt.vif [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg',id=14,image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f148fe63-b9e9-42f1-b9d7-8790a6058874'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce8fb01897ec4dc4a54e7b478a0450c6',ramdisk_id='',reservation_id='r-boklydnb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1865175500',owner_user_name='tempest-PrometheusGabbiTest-1865175500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:08:28Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='87b1f4a5842648dead0562b1cf8b4f18',uuid=1ba24bd2-a29b-4c5b-b8c7-cba0830ed166,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.175 189568 DEBUG nova.network.os_vif_util [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converting VIF {"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.175 189568 DEBUG nova.network.os_vif_util [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:d7,bridge_name='br-int',has_traffic_filtering=True,id=3f58b3a2-d9b9-4462-8f74-88eea7d00105,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f58b3a2-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.178 189568 DEBUG nova.objects.instance [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.189 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] End _get_guest_xml xml=<domain type="kvm">
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <uuid>1ba24bd2-a29b-4c5b-b8c7-cba0830ed166</uuid>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <name>instance-0000000e</name>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <memory>131072</memory>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <vcpu>1</vcpu>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <metadata>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <nova:name>te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg</nova:name>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <nova:creationTime>2025-12-01 20:08:32</nova:creationTime>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <nova:flavor name="m1.nano">
Dec  1 20:08:32 compute-0 nova_compute[189564]:        <nova:memory>128</nova:memory>
Dec  1 20:08:32 compute-0 nova_compute[189564]:        <nova:disk>1</nova:disk>
Dec  1 20:08:32 compute-0 nova_compute[189564]:        <nova:swap>0</nova:swap>
Dec  1 20:08:32 compute-0 nova_compute[189564]:        <nova:ephemeral>0</nova:ephemeral>
Dec  1 20:08:32 compute-0 nova_compute[189564]:        <nova:vcpus>1</nova:vcpus>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      </nova:flavor>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <nova:owner>
Dec  1 20:08:32 compute-0 nova_compute[189564]:        <nova:user uuid="87b1f4a5842648dead0562b1cf8b4f18">tempest-PrometheusGabbiTest-1865175500-project-member</nova:user>
Dec  1 20:08:32 compute-0 nova_compute[189564]:        <nova:project uuid="ce8fb01897ec4dc4a54e7b478a0450c6">tempest-PrometheusGabbiTest-1865175500</nova:project>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      </nova:owner>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <nova:root type="image" uuid="bffb6851-f47b-44e0-90e7-e01d72f9a4d2"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <nova:ports>
Dec  1 20:08:32 compute-0 nova_compute[189564]:        <nova:port uuid="3f58b3a2-d9b9-4462-8f74-88eea7d00105">
Dec  1 20:08:32 compute-0 nova_compute[189564]:          <nova:ip type="fixed" address="10.100.1.231" ipVersion="4"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:        </nova:port>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      </nova:ports>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    </nova:instance>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  </metadata>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <sysinfo type="smbios">
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <system>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <entry name="manufacturer">RDO</entry>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <entry name="product">OpenStack Compute</entry>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <entry name="serial">1ba24bd2-a29b-4c5b-b8c7-cba0830ed166</entry>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <entry name="uuid">1ba24bd2-a29b-4c5b-b8c7-cba0830ed166</entry>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <entry name="family">Virtual Machine</entry>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    </system>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  </sysinfo>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <os>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <boot dev="hd"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <smbios mode="sysinfo"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  </os>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <features>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <acpi/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <apic/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <vmcoreinfo/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  </features>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <clock offset="utc">
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <timer name="pit" tickpolicy="delay"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <timer name="hpet" present="no"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  </clock>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <cpu mode="host-model" match="exact">
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <topology sockets="1" cores="1" threads="1"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  </cpu>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  <devices>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <disk type="file" device="disk">
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <target dev="vda" bus="virtio"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <disk type="file" device="cdrom">
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <driver name="qemu" type="raw" cache="none"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <source file="/var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.config"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <target dev="sda" bus="sata"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    </disk>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <interface type="ethernet">
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <mac address="fa:16:3e:a9:69:d7"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <driver name="vhost" rx_queue_size="512"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <mtu size="1442"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <target dev="tap3f58b3a2-d9"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    </interface>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <serial type="pty">
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <log file="/var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/console.log" append="off"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    </serial>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <video>
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <model type="virtio"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    </video>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <input type="tablet" bus="usb"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <rng model="virtio">
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <backend model="random">/dev/urandom</backend>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    </rng>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="pci" model="pcie-root-port"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <controller type="usb" index="0"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    <memballoon model="virtio">
Dec  1 20:08:32 compute-0 nova_compute[189564]:      <stats period="10"/>
Dec  1 20:08:32 compute-0 nova_compute[189564]:    </memballoon>
Dec  1 20:08:32 compute-0 nova_compute[189564]:  </devices>
Dec  1 20:08:32 compute-0 nova_compute[189564]: </domain>
Dec  1 20:08:32 compute-0 nova_compute[189564]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.191 189568 DEBUG nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Preparing to wait for external event network-vif-plugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.191 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.191 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.192 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.192 189568 DEBUG nova.virt.libvirt.vif [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-01T20:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg',id=14,image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f148fe63-b9e9-42f1-b9d7-8790a6058874'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ce8fb01897ec4dc4a54e7b478a0450c6',ramdisk_id='',reservation_id='r-boklydnb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1865175500',owner_user_name='tempest-PrometheusGabbiTest-1865175500-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-01T20:08:28Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='87b1f4a5842648dead0562b1cf8b4f18',uuid=1ba24bd2-a29b-4c5b-b8c7-cba0830ed166,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.193 189568 DEBUG nova.network.os_vif_util [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converting VIF {"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.193 189568 DEBUG nova.network.os_vif_util [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:d7,bridge_name='br-int',has_traffic_filtering=True,id=3f58b3a2-d9b9-4462-8f74-88eea7d00105,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f58b3a2-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.194 189568 DEBUG os_vif [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:d7,bridge_name='br-int',has_traffic_filtering=True,id=3f58b3a2-d9b9-4462-8f74-88eea7d00105,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f58b3a2-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.194 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.195 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.195 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.197 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.198 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f58b3a2-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.198 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3f58b3a2-d9, col_values=(('external_ids', {'iface-id': '3f58b3a2-d9b9-4462-8f74-88eea7d00105', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:69:d7', 'vm-uuid': '1ba24bd2-a29b-4c5b-b8c7-cba0830ed166'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.200 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.202 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:08:32 compute-0 NetworkManager[56474]: <info>  [1764619712.2035] manager: (tap3f58b3a2-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.209 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.210 189568 INFO os_vif [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:69:d7,bridge_name='br-int',has_traffic_filtering=True,id=3f58b3a2-d9b9-4462-8f74-88eea7d00105,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f58b3a2-d9')#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.264 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.264 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.265 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] No VIF found with MAC fa:16:3e:a9:69:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.265 189568 INFO nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Using config drive#033[00m
Dec  1 20:08:32 compute-0 podman[258517]: 2025-12-01 20:08:32.36044146 +0000 UTC m=+0.117259108 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  1 20:08:32 compute-0 podman[258518]: 2025-12-01 20:08:32.371345908 +0000 UTC m=+0.124615912 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  1 20:08:32 compute-0 podman[258516]: 2025-12-01 20:08:32.37233621 +0000 UTC m=+0.137484534 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  1 20:08:32 compute-0 podman[258514]: 2025-12-01 20:08:32.372257268 +0000 UTC m=+0.139770018 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.openshift.expose-services=, vcs-type=git, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, distribution-scope=public)
Dec  1 20:08:32 compute-0 podman[258529]: 2025-12-01 20:08:32.387167443 +0000 UTC m=+0.138961881 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.794 189568 INFO nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Creating config drive at /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.config#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.800 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppizv_w5u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:08:32 compute-0 nova_compute[189564]: 2025-12-01 20:08:32.928 189568 DEBUG oslo_concurrency.processutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppizv_w5u" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:08:33 compute-0 kernel: tap3f58b3a2-d9: entered promiscuous mode
Dec  1 20:08:33 compute-0 NetworkManager[56474]: <info>  [1764619713.0247] manager: (tap3f58b3a2-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.033 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:33 compute-0 ovn_controller[97948]: 2025-12-01T20:08:33Z|00174|binding|INFO|Claiming lport 3f58b3a2-d9b9-4462-8f74-88eea7d00105 for this chassis.
Dec  1 20:08:33 compute-0 ovn_controller[97948]: 2025-12-01T20:08:33Z|00175|binding|INFO|3f58b3a2-d9b9-4462-8f74-88eea7d00105: Claiming fa:16:3e:a9:69:d7 10.100.1.231
Dec  1 20:08:33 compute-0 ovn_controller[97948]: 2025-12-01T20:08:33Z|00176|binding|INFO|Setting lport 3f58b3a2-d9b9-4462-8f74-88eea7d00105 ovn-installed in OVS
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.060 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.061 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.070 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:33 compute-0 systemd-udevd[258625]: Network interface NamePolicy= disabled on kernel command line.
Dec  1 20:08:33 compute-0 NetworkManager[56474]: <info>  [1764619713.1024] device (tap3f58b3a2-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  1 20:08:33 compute-0 systemd-machined[155891]: New machine qemu-15-instance-0000000e.
Dec  1 20:08:33 compute-0 NetworkManager[56474]: <info>  [1764619713.1116] device (tap3f58b3a2-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  1 20:08:33 compute-0 ovn_controller[97948]: 2025-12-01T20:08:33Z|00177|binding|INFO|Setting lport 3f58b3a2-d9b9-4462-8f74-88eea7d00105 up in Southbound
Dec  1 20:08:33 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.118 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:69:d7 10.100.1.231'], port_security=['fa:16:3e:a9:69:d7 10.100.1.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.231/16', 'neutron:device_id': '1ba24bd2-a29b-4c5b-b8c7-cba0830ed166', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '31f326a2-1dd0-42fd-9a01-b17a7fb79ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4321fa83-980a-46fb-a7a0-cf14441fe575, chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=3f58b3a2-d9b9-4462-8f74-88eea7d00105) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.119 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 3f58b3a2-d9b9-4462-8f74-88eea7d00105 in datapath b72e0b6b-24ff-49af-9297-d0f55dd2fe07 bound to our chassis#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.121 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b72e0b6b-24ff-49af-9297-d0f55dd2fe07#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.143 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbdb822-525b-4664-93d8-4577d33b1dca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.187 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[d24d6a52-c362-4a72-8d5d-4e3043f3fdf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.191 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffaadcb-a1c1-45bb-b4b6-eb8166a0855e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.225 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[e3479c5c-f271-4f75-b517-faa93b561c4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.250 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[af4bbf75-6c38-42aa-b2de-62aebc1cfb22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb72e0b6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:a1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601774, 'reachable_time': 15601, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258642, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.275 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1d6cee-7fa3-4b40-a362-22ed73b7111a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb72e0b6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601788, 'tstamp': 601788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258643, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapb72e0b6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601792, 'tstamp': 601792}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258643, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.279 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb72e0b6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.281 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.282 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.283 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb72e0b6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.283 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.283 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb72e0b6b-20, col_values=(('external_ids', {'iface-id': '7a2b95ce-3fa4-48e0-a152-7ae4f9eed7c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:08:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:08:33.284 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.781 189568 DEBUG nova.compute.manager [req-aaeeb478-8933-4f06-853c-8a91a36a3b62 req-dca51113-aae5-4fbf-a0d9-519e24c1589c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Received event network-vif-plugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.783 189568 DEBUG oslo_concurrency.lockutils [req-aaeeb478-8933-4f06-853c-8a91a36a3b62 req-dca51113-aae5-4fbf-a0d9-519e24c1589c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.783 189568 DEBUG oslo_concurrency.lockutils [req-aaeeb478-8933-4f06-853c-8a91a36a3b62 req-dca51113-aae5-4fbf-a0d9-519e24c1589c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.784 189568 DEBUG oslo_concurrency.lockutils [req-aaeeb478-8933-4f06-853c-8a91a36a3b62 req-dca51113-aae5-4fbf-a0d9-519e24c1589c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.784 189568 DEBUG nova.compute.manager [req-aaeeb478-8933-4f06-853c-8a91a36a3b62 req-dca51113-aae5-4fbf-a0d9-519e24c1589c 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Processing event network-vif-plugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  1 20:08:33 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  1 20:08:33 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.861 189568 DEBUG nova.network.neutron [req-6e4ab898-c9aa-4ff5-b0ca-596d20d3fc7d req-98005170-4d75-4859-ad9a-2389173fbaab 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Updated VIF entry in instance network info cache for port 3f58b3a2-d9b9-4462-8f74-88eea7d00105. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.862 189568 DEBUG nova.network.neutron [req-6e4ab898-c9aa-4ff5-b0ca-596d20d3fc7d req-98005170-4d75-4859-ad9a-2389173fbaab 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Updating instance_info_cache with network_info: [{"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.879 189568 DEBUG oslo_concurrency.lockutils [req-6e4ab898-c9aa-4ff5-b0ca-596d20d3fc7d req-98005170-4d75-4859-ad9a-2389173fbaab 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Releasing lock "refresh_cache-1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.965 189568 DEBUG nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.967 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619713.9663403, 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.967 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] VM Started (Lifecycle Event)#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.972 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.979 189568 INFO nova.virt.libvirt.driver [-] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Instance spawned successfully.#033[00m
Dec  1 20:08:33 compute-0 nova_compute[189564]: 2025-12-01 20:08:33.979 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.030 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.041 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.048 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.050 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.051 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.051 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.052 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.053 189568 DEBUG nova.virt.libvirt.driver [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.063 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.065 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619713.9666243, 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.065 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] VM Paused (Lifecycle Event)#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.067 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.070 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.092 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.099 189568 DEBUG nova.virt.driver [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] Emitting event <LifecycleEvent: 1764619713.9736958, 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.100 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] VM Resumed (Lifecycle Event)#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.123 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.130 189568 DEBUG nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.145 189568 INFO nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Took 5.56 seconds to spawn the instance on the hypervisor.#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.145 189568 DEBUG nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.248 189568 INFO nova.compute.manager [None req-025acbbd-8b0a-4055-b5a6-f0460d6fa220 - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.329 189568 INFO nova.compute.manager [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Took 6.11 seconds to build instance.#033[00m
Dec  1 20:08:34 compute-0 nova_compute[189564]: 2025-12-01 20:08:34.430 189568 DEBUG oslo_concurrency.lockutils [None req-ef2f901d-b422-4159-86ba-6c6b084dc5b9 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:08:35 compute-0 nova_compute[189564]: 2025-12-01 20:08:35.886 189568 DEBUG nova.compute.manager [req-095b4371-e736-408e-8cc5-3b484fec39e4 req-4bf21b8d-159c-4d9a-ab2d-9eb93d572641 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Received event network-vif-plugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:08:35 compute-0 nova_compute[189564]: 2025-12-01 20:08:35.887 189568 DEBUG oslo_concurrency.lockutils [req-095b4371-e736-408e-8cc5-3b484fec39e4 req-4bf21b8d-159c-4d9a-ab2d-9eb93d572641 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:08:35 compute-0 nova_compute[189564]: 2025-12-01 20:08:35.888 189568 DEBUG oslo_concurrency.lockutils [req-095b4371-e736-408e-8cc5-3b484fec39e4 req-4bf21b8d-159c-4d9a-ab2d-9eb93d572641 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:08:35 compute-0 nova_compute[189564]: 2025-12-01 20:08:35.888 189568 DEBUG oslo_concurrency.lockutils [req-095b4371-e736-408e-8cc5-3b484fec39e4 req-4bf21b8d-159c-4d9a-ab2d-9eb93d572641 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:08:35 compute-0 nova_compute[189564]: 2025-12-01 20:08:35.889 189568 DEBUG nova.compute.manager [req-095b4371-e736-408e-8cc5-3b484fec39e4 req-4bf21b8d-159c-4d9a-ab2d-9eb93d572641 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] No waiting events found dispatching network-vif-plugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:08:35 compute-0 nova_compute[189564]: 2025-12-01 20:08:35.889 189568 WARNING nova.compute.manager [req-095b4371-e736-408e-8cc5-3b484fec39e4 req-4bf21b8d-159c-4d9a-ab2d-9eb93d572641 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Received unexpected event network-vif-plugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 for instance with vm_state active and task_state None.#033[00m
Dec  1 20:08:37 compute-0 nova_compute[189564]: 2025-12-01 20:08:37.202 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:39 compute-0 nova_compute[189564]: 2025-12-01 20:08:39.070 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:42 compute-0 nova_compute[189564]: 2025-12-01 20:08:42.208 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:44 compute-0 nova_compute[189564]: 2025-12-01 20:08:44.076 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:44 compute-0 podman[258678]: 2025-12-01 20:08:44.829027042 +0000 UTC m=+0.124733497 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=9.6)
Dec  1 20:08:44 compute-0 podman[258677]: 2025-12-01 20:08:44.836696926 +0000 UTC m=+0.135286493 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 20:08:47 compute-0 nova_compute[189564]: 2025-12-01 20:08:47.210 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:49 compute-0 nova_compute[189564]: 2025-12-01 20:08:49.074 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:52 compute-0 nova_compute[189564]: 2025-12-01 20:08:52.214 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:54 compute-0 nova_compute[189564]: 2025-12-01 20:08:54.080 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:56 compute-0 podman[258724]: 2025-12-01 20:08:56.364200178 +0000 UTC m=+0.124831641 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  1 20:08:57 compute-0 nova_compute[189564]: 2025-12-01 20:08:57.219 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:59 compute-0 nova_compute[189564]: 2025-12-01 20:08:59.079 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:08:59 compute-0 podman[203750]: time="2025-12-01T20:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:08:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:08:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 20:09:00 compute-0 podman[258743]: 2025-12-01 20:09:00.367471239 +0000 UTC m=+0.128644432 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  1 20:09:01 compute-0 openstack_network_exporter[205914]: ERROR   20:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:09:01 compute-0 openstack_network_exporter[205914]: ERROR   20:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:09:01 compute-0 openstack_network_exporter[205914]: ERROR   20:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:09:01 compute-0 openstack_network_exporter[205914]: ERROR   20:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:09:01 compute-0 openstack_network_exporter[205914]: ERROR   20:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:09:02 compute-0 nova_compute[189564]: 2025-12-01 20:09:02.222 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:09:03 compute-0 ovn_controller[97948]: 2025-12-01T20:09:03Z|00178|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  1 20:09:03 compute-0 podman[258769]: 2025-12-01 20:09:03.347848803 +0000 UTC m=+0.087377033 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec  1 20:09:03 compute-0 podman[258767]: 2025-12-01 20:09:03.352619667 +0000 UTC m=+0.104250783 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  1 20:09:03 compute-0 podman[258768]: 2025-12-01 20:09:03.361853161 +0000 UTC m=+0.113740685 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:09:03 compute-0 podman[258766]: 2025-12-01 20:09:03.362219263 +0000 UTC m=+0.116899596 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-type=git, managed_by=edpm_ansible, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Dec  1 20:09:03 compute-0 podman[258770]: 2025-12-01 20:09:03.394974179 +0000 UTC m=+0.127893457 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:09:04 compute-0 nova_compute[189564]: 2025-12-01 20:09:04.082 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:05 compute-0 ovn_controller[97948]: 2025-12-01T20:09:05Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:69:d7 10.100.1.231
Dec  1 20:09:05 compute-0 ovn_controller[97948]: 2025-12-01T20:09:05Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:69:d7 10.100.1.231
Dec  1 20:09:07 compute-0 nova_compute[189564]: 2025-12-01 20:09:07.233 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:09 compute-0 nova_compute[189564]: 2025-12-01 20:09:09.088 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:09:12.227 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:09:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:09:12.228 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:09:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:09:12.228 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:09:12 compute-0 nova_compute[189564]: 2025-12-01 20:09:12.238 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:14 compute-0 nova_compute[189564]: 2025-12-01 20:09:14.089 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:15 compute-0 podman[258872]: 2025-12-01 20:09:15.366437496 +0000 UTC m=+0.114248611 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 20:09:15 compute-0 podman[258873]: 2025-12-01 20:09:15.3900214 +0000 UTC m=+0.132967430 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9)
Dec  1 20:09:17 compute-0 nova_compute[189564]: 2025-12-01 20:09:17.244 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:19 compute-0 nova_compute[189564]: 2025-12-01 20:09:19.092 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:21 compute-0 nova_compute[189564]: 2025-12-01 20:09:21.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:21 compute-0 nova_compute[189564]: 2025-12-01 20:09:21.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:21 compute-0 nova_compute[189564]: 2025-12-01 20:09:21.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 20:09:22 compute-0 nova_compute[189564]: 2025-12-01 20:09:22.249 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:24 compute-0 nova_compute[189564]: 2025-12-01 20:09:24.097 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:24 compute-0 nova_compute[189564]: 2025-12-01 20:09:24.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:27 compute-0 nova_compute[189564]: 2025-12-01 20:09:27.254 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:27 compute-0 podman[258917]: 2025-12-01 20:09:27.342970505 +0000 UTC m=+0.102535517 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 20:09:28 compute-0 nova_compute[189564]: 2025-12-01 20:09:28.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:29 compute-0 nova_compute[189564]: 2025-12-01 20:09:29.097 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:29 compute-0 nova_compute[189564]: 2025-12-01 20:09:29.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:29 compute-0 nova_compute[189564]: 2025-12-01 20:09:29.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 20:09:29 compute-0 nova_compute[189564]: 2025-12-01 20:09:29.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 20:09:29 compute-0 nova_compute[189564]: 2025-12-01 20:09:29.645 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  1 20:09:29 compute-0 nova_compute[189564]: 2025-12-01 20:09:29.646 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  1 20:09:29 compute-0 nova_compute[189564]: 2025-12-01 20:09:29.647 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  1 20:09:29 compute-0 nova_compute[189564]: 2025-12-01 20:09:29.648 189568 DEBUG nova.objects.instance [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2e63a3e2-688c-470f-9b69-98ac22f0c892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  1 20:09:29 compute-0 podman[203750]: time="2025-12-01T20:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:09:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:09:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec  1 20:09:30 compute-0 podman[258939]: 2025-12-01 20:09:30.549945462 +0000 UTC m=+0.095453491 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 20:09:31 compute-0 openstack_network_exporter[205914]: ERROR   20:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:09:31 compute-0 openstack_network_exporter[205914]: ERROR   20:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:09:31 compute-0 openstack_network_exporter[205914]: ERROR   20:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:09:31 compute-0 openstack_network_exporter[205914]: ERROR   20:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:09:31 compute-0 openstack_network_exporter[205914]: ERROR   20:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:09:31 compute-0 nova_compute[189564]: 2025-12-01 20:09:31.850 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updating instance_info_cache with network_info: [{"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  1 20:09:31 compute-0 nova_compute[189564]: 2025-12-01 20:09:31.921 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-2e63a3e2-688c-470f-9b69-98ac22f0c892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  1 20:09:31 compute-0 nova_compute[189564]: 2025-12-01 20:09:31.922 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  1 20:09:31 compute-0 nova_compute[189564]: 2025-12-01 20:09:31.923 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:31 compute-0 nova_compute[189564]: 2025-12-01 20:09:31.923 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:31 compute-0 nova_compute[189564]: 2025-12-01 20:09:31.997 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:09:31 compute-0 nova_compute[189564]: 2025-12-01 20:09:31.998 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:09:31 compute-0 nova_compute[189564]: 2025-12-01 20:09:31.999 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:09:31 compute-0 nova_compute[189564]: 2025-12-01 20:09:31.999 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.153 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.256 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.258 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.278 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.322 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.333 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.397 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.398 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.494 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.952 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.954 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5051MB free_disk=72.24808120727539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.955 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:09:32 compute-0 nova_compute[189564]: 2025-12-01 20:09:32.956 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.054 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 2e63a3e2-688c-470f-9b69-98ac22f0c892 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.055 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.055 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.056 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.116 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.130 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.148 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.150 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.476 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:33 compute-0 nova_compute[189564]: 2025-12-01 20:09:33.478 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:34 compute-0 nova_compute[189564]: 2025-12-01 20:09:34.100 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:34 compute-0 podman[258976]: 2025-12-01 20:09:34.366774694 +0000 UTC m=+0.131338797 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4)
Dec  1 20:09:34 compute-0 podman[258985]: 2025-12-01 20:09:34.370501194 +0000 UTC m=+0.104693607 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  1 20:09:34 compute-0 podman[258977]: 2025-12-01 20:09:34.375963798 +0000 UTC m=+0.135003305 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  1 20:09:34 compute-0 podman[258978]: 2025-12-01 20:09:34.392654682 +0000 UTC m=+0.130570173 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  1 20:09:34 compute-0 podman[258991]: 2025-12-01 20:09:34.427140384 +0000 UTC m=+0.148946341 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller)
Dec  1 20:09:37 compute-0 nova_compute[189564]: 2025-12-01 20:09:37.284 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:39 compute-0 nova_compute[189564]: 2025-12-01 20:09:39.102 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:40 compute-0 nova_compute[189564]: 2025-12-01 20:09:40.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:09:42 compute-0 nova_compute[189564]: 2025-12-01 20:09:42.296 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:44 compute-0 nova_compute[189564]: 2025-12-01 20:09:44.107 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:46 compute-0 podman[259075]: 2025-12-01 20:09:46.294877947 +0000 UTC m=+0.062580041 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 20:09:46 compute-0 podman[259076]: 2025-12-01 20:09:46.3209856 +0000 UTC m=+0.078126937 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal)
Dec  1 20:09:47 compute-0 nova_compute[189564]: 2025-12-01 20:09:47.319 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.824 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.824 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.824 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.825 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6ebb4140>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
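The twenty registrations above are stevedore Extension objects loaded for the "pollsters" source and handed to a shared ThreadPoolExecutor with empty cache, history, and discovery-cache dictionaries. A minimal sketch of enumerating the same extensions, assuming the standard 'ceilometer.poll.compute' entry-point namespace is what is installed here:

    from stevedore import extension

    # Load every compute pollster registered under the assumed
    # 'ceilometer.poll.compute' entry-point namespace and instantiate it,
    # mirroring the Extension objects logged above.
    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',
        invoke_on_load=True,
    )
    for ext in mgr:
        # ext.name is the meter name (e.g. 'disk.device.read.bytes');
        # ext.obj is the pollster instance the manager registers.
        print(ext.name, type(ext.obj).__name__)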
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.831 15 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  1 20:09:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:48.832 15 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}1de7f74c971f7abb068fd11d4466b13593717e525e549549f884402049cc943e" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  1 20:09:49 compute-0 nova_compute[189564]: 2025-12-01 20:09:49.110 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.797 15 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Mon, 01 Dec 2025 20:09:48 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c3191b92-9f78-4c83-9f7c-894699070c51 x-openstack-request-id: req-c3191b92-9f78-4c83-9f7c-894699070c51 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.797 15 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166", "name": "te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg", "status": "ACTIVE", "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "user_id": "87b1f4a5842648dead0562b1cf8b4f18", "metadata": {"metering.server_group": "f148fe63-b9e9-42f1-b9d7-8790a6058874"}, "hostId": "ed8356c925a37a95605f3d20b7786e3709927537fc31622d463f3259", "image": {"id": "bffb6851-f47b-44e0-90e7-e01d72f9a4d2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/bffb6851-f47b-44e0-90e7-e01d72f9a4d2"}]}, "flavor": {"id": "69252fc0-77e5-4ac1-807d-77003542464f", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/69252fc0-77e5-4ac1-807d-77003542464f"}]}, "created": "2025-12-01T20:08:26Z", "updated": "2025-12-01T20:08:34Z", "addresses": {"": [{"version": 4, "addr": "10.100.1.231", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:a9:69:d7"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-01T20:08:34.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.797 15 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 used request id req-c3191b92-9f78-4c83-9f7c-894699070c51 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
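The request/response pair above is a plain microversion-2.1 GET for a single server. A minimal sketch of reproducing it with python-novaclient; the Keystone endpoint and credentials below are placeholders, not values taken from this log:

    from keystoneauth1 import identity, session
    from novaclient import client

    # Placeholder credentials and auth endpoint (assumptions, not from the log).
    auth = identity.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_id='default', project_domain_id='default',
    )
    sess = session.Session(auth=auth)
    nova = client.Client('2.1', session=sess)

    # Same call the agent's discovery code issues via get_server().
    server = nova.servers.get('1ba24bd2-a29b-4c5b-b8c7-cba0830ed166')
    print(server.status)  # 'ACTIVE', matching the RESP BODY above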
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.799 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1ba24bd2-a29b-4c5b-b8c7-cba0830ed166', 'name': 'te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg', 'flavor': {'id': '69252fc0-77e5-4ac1-807d-77003542464f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'bffb6851-f47b-44e0-90e7-e01d72f9a4d2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'user_id': '87b1f4a5842648dead0562b1cf8b4f18', 'hostId': 'ed8356c925a37a95605f3d20b7786e3709927537fc31622d463f3259', 'status': 'active', 'metadata': {'metering.server_group': 'f148fe63-b9e9-42f1-b9d7-8790a6058874'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.803 15 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2e63a3e2-688c-470f-9b69-98ac22f0c892', 'name': 'te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2', 'flavor': {'id': '69252fc0-77e5-4ac1-807d-77003542464f', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'bffb6851-f47b-44e0-90e7-e01d72f9a4d2'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'user_id': '87b1f4a5842648dead0562b1cf8b4f18', 'hostId': 'ed8356c925a37a95605f3d20b7786e3709927537fc31622d463f3259', 'status': 'active', 'metadata': {'metering.server_group': 'f148fe63-b9e9-42f1-b9d7-8790a6058874'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.803 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.803 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.803 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.804 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-01T20:09:49.804091) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.810 15 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 / tap3f58b3a2-d9 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.810 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.815 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.816 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
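The two delta samples differ because delta meters are computed against the previous poll: instance 2e63a3e2 has a predecessor and reports 168 bytes, while the vNIC of 1ba24bd2 has none yet ("No delta meter predecessor" above) and is emitted as 0. A minimal sketch of that logic, not ceilometer's exact implementation:

    # Cache of the last cumulative rx_bytes seen per (instance, vNIC) pair.
    _previous = {}

    def rx_bytes_delta(instance_id, iface, rx_bytes):
        key = (instance_id, iface)
        prev = _previous.get(key)
        _previous[key] = rx_bytes
        if prev is None:
            # First observation: no predecessor, so the sample is 0.
            return 0
        # Guard against counter resets after reboot or migration.
        return max(rx_bytes - prev, 0)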
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.816 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.816 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.816 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.816 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.816 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.817 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.817 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-01T20:09:49.816795) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.817 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.818 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.818 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.818 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.818 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.818 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.818 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.818 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.819 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.819 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.819 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-01T20:09:49.818652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.820 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.820 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.820 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.820 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.820 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.821 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.821 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-01T20:09:49.820477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.821 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.822 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.822 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.822 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.822 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.822 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.822 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.823 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.823 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.824 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.824 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-01T20:09:49.822362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.824 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.824 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.824 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-01T20:09:49.824760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.843 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.844 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.860 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.861 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.861 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
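Each instance reports two capacity samples: 1073741824 bytes is the 1 GiB root disk implied by the m1.nano flavor ('disk': 1 in the instance data above), and the 509952-byte device is consistent with the config drive the servers booted with ("config_drive": "True" in the RESP BODY), though the samples themselves do not name the device. A quick arithmetic check:

    # 1 GiB root disk from the m1.nano flavor ('disk': 1).
    assert 1 * 1024 ** 3 == 1073741824
    # The small second device: 509952 bytes = 498 KiB, plausibly the
    # config drive (an assumption; the log does not name the device).
    print(509952 / 1024)  # 498.0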
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.861 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.862 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.862 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.862 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.862 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-01T20:09:49.862337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.905 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.905 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.951 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.bytes volume: 28969984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.952 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.952 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.953 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.953 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.953 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.953 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.953 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.953 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-01T20:09:49.953533) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.954 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.954 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.954 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.954 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.955 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.955 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.955 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.955 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-01T20:09:49.955189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.955 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg>]
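This ERROR is the expected outcome of the preceding line: the libvirt inspector provides no rate data, so the pollster raises PollsterPermanentError and the manager blacklists the listed resources for this source instead of retrying on every cycle. A hedged sketch of how a pollster signals this, following the shapes in ceilometer.polling.plugin_base but not reproducing the shipped IncomingBytesRatePollster:

    from ceilometer.polling import plugin_base

    class ExamplePollster(plugin_base.PollsterBase):
        """Illustration only: permanently gives up on its resources."""

        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            # Raising PollsterPermanentError tells the manager never to
            # hand these resources to this pollster again on this source.
            raise plugin_base.PollsterPermanentError(resources)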
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.956 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.956 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.956 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.956 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.956 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.956 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.read.latency volume: 593467445 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.956 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.read.latency volume: 67563058 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.957 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.latency volume: 649034984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.957 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.latency volume: 56737496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.957 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.958 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.958 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.958 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.958 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-01T20:09:49.956405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.958 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.958 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.959 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.959 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.959 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.requests volume: 1041 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.959 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.960 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.960 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.960 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.960 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.960 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.961 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.961 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-01T20:09:49.958897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-01T20:09:49.961087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.961 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.962 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.962 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.962 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.962 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.963 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.963 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.963 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.963 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.963 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.write.bytes volume: 72806400 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.963 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.964 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.964 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.964 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.964 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.965 15 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.965 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-01T20:09:49.963339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.965 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.965 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.966 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-01T20:09:49.965455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:49 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:49.986 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.009 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.010 15 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
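The power.state volume of 1 for both instances is the standard Nova/libvirt power-state code for a running domain, matching "OS-EXT-STS:power_state": 1 in the server response above. The mapping, for reference:

    # Standard Nova power-state codes (see nova/compute/power_state.py).
    POWER_STATES = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATES[1])  # both instances report RUNNING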
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.010 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.010 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.010 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.010 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.010 15 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.010 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.write.latency volume: 4240918197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.011 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.011 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.latency volume: 3249905700 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-01T20:09:50.010730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.011 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.012 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
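The burst above is one complete pollster cycle: the manager runs discovery, checks whether the pollster needs tooz coordination (none is configured here, so the hashring is None), records a heartbeat, converts per-instance stats to samples, and logs completion. A minimal sketch of that control flow, with illustrative names rather than ceilometer's real classes:

    import datetime

    def owned_by_me(resource, hashrings):
        # Placeholder partition check; with coordination enabled this would
        # hash the resource id onto a tooz hashring.
        return True

    def run_pollster(pollster, discover, hashrings=None):
        resources = discover("local_instances")      # "Executing discovery process..."
        if hashrings:                                # "Checking if we need coordination..."
            resources = [r for r in resources if owned_by_me(r, hashrings)]
        heartbeat = datetime.datetime.utcnow()       # "Pollster heartbeat update: ..."
        samples = [pollster.stats_to_sample(r, pollster.inspect(r))
                   for r in resources]               # "<uuid>/<meter> volume: ..."
        return heartbeat, samples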
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.012 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.012 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.012 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.012 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.012 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.012 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.write.requests volume: 273 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.012 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.013 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.requests volume: 319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.013 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-01T20:09:50.012524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.013 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.013 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.013 15 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.014 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.014 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.014 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-01T20:09:50.014127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.014 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.014 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.014 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.014 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.015 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.015 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.015 15 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.015 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.015 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.015 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.016 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.016 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.016 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.016 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.016 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.016 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.016 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-01T20:09:50.015670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.017 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.017 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-01T20:09:50.016520) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.017 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.017 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.017 15 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.017 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.017 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.017 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.018 15 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.018 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.018 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.018 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.018 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.018 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.018 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.018 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.019 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.019 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.019 15 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.019 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.019 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.019 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.019 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.019 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.020 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.020 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.020 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.020 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.020 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.020 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.020 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.021 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.021 15 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.021 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.021 15 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.021 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.021 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.021 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 15 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-01T20:09:50.017718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 15 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg>]
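The ERROR at 20:09:50.022 is the manager blacklisting resources a pollster can never serve: LibvirtInspector has no rate data (see the DEBUG line just above), so the pollster raises PollsterPermanentError with the affected servers and the manager excludes them from future network.outgoing.bytes.rate polls on this source. PollsterPermanentError lives in ceilometer.polling.plugin_base; the pollster class below is an illustrative sketch, not the shipped implementation:

    from ceilometer.polling import plugin_base

    class OutgoingBytesRateSketch(plugin_base.PollsterBase):
        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            unsupported = [r for r in resources if not self._has_rate_data(r)]
            if unsupported:
                # The manager catches this and logs
                # "Prevent pollster ... from polling ... anymore!"
                raise plugin_base.PollsterPermanentError(unsupported)
            return []

        def _has_rate_data(self, resource):
            return False  # LibvirtInspector provides no rate data in this log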
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 15 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/cpu volume: 74080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-01T20:09:50.018517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.023 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/cpu volume: 257100000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.023 15 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.023 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.023 15 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-01T20:09:50.019605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.023 15 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.023 15 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.023 15 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.024 15 DEBUG ceilometer.compute.pollsters [-] 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/memory.usage volume: 43.53515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.024 15 DEBUG ceilometer.compute.pollsters [-] 2e63a3e2-688c-470f-9b69-98ac22f0c892/memory.usage volume: 43.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.024 15 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
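Assuming ceilometer's documented units (cpu is cumulative CPU time in nanoseconds, memory.usage is in MB), the volumes above decode to roughly 74.1 s and 257.1 s of guest CPU time and about 43.5 MB of resident memory per instance; the odd-looking fraction is simply a whole number of KiB:

    # Decode the sample volumes above (units assumed per ceilometer's docs).
    for uuid, ns in {"1ba24bd2": 74_080_000_000, "2e63a3e2": 257_100_000_000}.items():
        print(uuid, ns / 1e9, "s of CPU time")   # 74.08 s / 257.1 s

    print(43.53515625 * 1024)                    # 44580.0 -- an exact KiB count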
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.025 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.025 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.025 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.025 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.025 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.025 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.026 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.027 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.027 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.027 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.027 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.027 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-01T20:09:50.020759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.027 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.027 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.028 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-01T20:09:50.021802) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.028 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.028 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.028 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-01T20:09:50.022725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  1 20:09:50 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:09:50.029 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-01T20:09:50.023943) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
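Note the thread ids: thread 15 runs the pollsters while thread 12 emits the "Updated heartbeat" confirmations, which is why the confirmations for disk.ephemeral.size or disk.root.size land after later pollsters have already started. An illustrative producer/consumer shape for that split (not ceilometer's actual implementation):

    import datetime, queue, threading

    beats = queue.Queue()

    def poller():                      # role of thread 15
        for meter in ("cpu", "memory.usage"):
            beats.put((meter, datetime.datetime.utcnow()))

    def status_updater(stop):          # role of thread 12
        while not stop.is_set() or not beats.empty():
            try:
                meter, ts = beats.get(timeout=0.1)
            except queue.Empty:
                continue
            print(f"Updated heartbeat for {meter} ({ts.isoformat()})")

    stop = threading.Event()
    t = threading.Thread(target=status_updater, args=(stop,))
    t.start(); poller(); stop.set(); t.join()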
Dec  1 20:09:52 compute-0 nova_compute[189564]: 2025-12-01 20:09:52.323 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:54 compute-0 nova_compute[189564]: 2025-12-01 20:09:54.113 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:57 compute-0 nova_compute[189564]: 2025-12-01 20:09:57.328 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:58 compute-0 podman[259128]: 2025-12-01 20:09:58.346381291 +0000 UTC m=+0.115059137 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible)
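The podman[259128] line is a periodic healthcheck event: podman executed the container's configured test ('/openstack/healthcheck', per the config_data above) and recorded health_status=healthy with a zero failing streak. The same check can be triggered by hand via the real `podman healthcheck run` subcommand; a small sketch:

    import subprocess

    # Exit code 0 means the multipathd container's healthcheck passed.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"],
                            capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")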
Dec  1 20:09:59 compute-0 nova_compute[189564]: 2025-12-01 20:09:59.117 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:09:59 compute-0 podman[203750]: time="2025-12-01T20:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:09:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:09:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Dec  1 20:10:01 compute-0 podman[259149]: 2025-12-01 20:10:01.343735239 +0000 UTC m=+0.106610108 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 20:10:01 compute-0 openstack_network_exporter[205914]: ERROR   20:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:10:01 compute-0 openstack_network_exporter[205914]: ERROR   20:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:10:01 compute-0 openstack_network_exporter[205914]: ERROR   20:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:10:01 compute-0 openstack_network_exporter[205914]: ERROR   20:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:10:01 compute-0 openstack_network_exporter[205914]: ERROR   20:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
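These exporter errors are expected on a compute node: ovn-northd only runs on controller nodes, and the dpif-netdev/pmd-* appctl commands only apply to a userspace (netdev) datapath, which this kernel-datapath host does not have, hence "please specify an existing datapath". The failing calls can be reproduced directly against ovs-appctl:

    import subprocess

    # Both commands fail the same way the exporter logs on a
    # kernel-datapath node.
    for cmd in (["ovs-appctl", "dpif-netdev/pmd-perf-show"],
                ["ovs-appctl", "dpif-netdev/pmd-rxq-show"]):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        print(cmd[-1], "->", proc.returncode, (proc.stderr or proc.stdout).strip())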
Dec  1 20:10:02 compute-0 nova_compute[189564]: 2025-12-01 20:10:02.333 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:10:04 compute-0 nova_compute[189564]: 2025-12-01 20:10:04.119 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:10:05 compute-0 podman[259174]: 2025-12-01 20:10:05.296488545 +0000 UTC m=+0.067163737 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  1 20:10:05 compute-0 podman[259176]: 2025-12-01 20:10:05.326950599 +0000 UTC m=+0.089031976 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  1 20:10:05 compute-0 podman[259175]: 2025-12-01 20:10:05.346886235 +0000 UTC m=+0.099136248 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec  1 20:10:05 compute-0 podman[259173]: 2025-12-01 20:10:05.351503104 +0000 UTC m=+0.121088621 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  1 20:10:05 compute-0 podman[259179]: 2025-12-01 20:10:05.408102772 +0000 UTC m=+0.157737711 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  1 20:10:07 compute-0 nova_compute[189564]: 2025-12-01 20:10:07.337 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:10:09 compute-0 nova_compute[189564]: 2025-12-01 20:10:09.123 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:10:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:12.229 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:10:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:12.230 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:10:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:12.231 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
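The acquire/acquired/released triplet is oslo.concurrency's standard lock tracing around neutron's ProcessMonitor._check_child_processes. Any code using the real lockutils API produces the same three DEBUG lines when debug logging is enabled:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # inspect child processes while holding the named in-process lock

    check_child_processes()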
Dec  1 20:10:12 compute-0 nova_compute[189564]: 2025-12-01 20:10:12.342 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:10:14 compute-0 nova_compute[189564]: 2025-12-01 20:10:14.125 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:10:17 compute-0 podman[259281]: 2025-12-01 20:10:17.332238607 +0000 UTC m=+0.094320665 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 20:10:17 compute-0 nova_compute[189564]: 2025-12-01 20:10:17.348 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:17 compute-0 podman[259282]: 2025-12-01 20:10:17.368402962 +0000 UTC m=+0.121361938 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  1 20:10:19 compute-0 nova_compute[189564]: 2025-12-01 20:10:19.128 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:22 compute-0 nova_compute[189564]: 2025-12-01 20:10:22.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:10:22 compute-0 nova_compute[189564]: 2025-12-01 20:10:22.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:10:22 compute-0 nova_compute[189564]: 2025-12-01 20:10:22.251 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  1 20:10:22 compute-0 nova_compute[189564]: 2025-12-01 20:10:22.353 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:24 compute-0 nova_compute[189564]: 2025-12-01 20:10:24.131 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:24 compute-0 nova_compute[189564]: 2025-12-01 20:10:24.972 189568 DEBUG oslo_concurrency.lockutils [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "2e63a3e2-688c-470f-9b69-98ac22f0c892" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:24 compute-0 nova_compute[189564]: 2025-12-01 20:10:24.973 189568 DEBUG oslo_concurrency.lockutils [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:24 compute-0 nova_compute[189564]: 2025-12-01 20:10:24.974 189568 DEBUG oslo_concurrency.lockutils [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:24 compute-0 nova_compute[189564]: 2025-12-01 20:10:24.975 189568 DEBUG oslo_concurrency.lockutils [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:24 compute-0 nova_compute[189564]: 2025-12-01 20:10:24.976 189568 DEBUG oslo_concurrency.lockutils [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:24 compute-0 nova_compute[189564]: 2025-12-01 20:10:24.978 189568 INFO nova.compute.manager [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Terminating instance#033[00m
Dec  1 20:10:24 compute-0 nova_compute[189564]: 2025-12-01 20:10:24.980 189568 DEBUG nova.compute.manager [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 20:10:25 compute-0 kernel: tap3076324c-17 (unregistering): left promiscuous mode
Dec  1 20:10:25 compute-0 NetworkManager[56474]: <info>  [1764619825.0320] device (tap3076324c-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.049 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:25 compute-0 ovn_controller[97948]: 2025-12-01T20:10:25Z|00179|binding|INFO|Releasing lport 3076324c-1772-4ebf-8d52-056282f5b5b9 from this chassis (sb_readonly=0)
Dec  1 20:10:25 compute-0 ovn_controller[97948]: 2025-12-01T20:10:25Z|00180|binding|INFO|Setting lport 3076324c-1772-4ebf-8d52-056282f5b5b9 down in Southbound
Dec  1 20:10:25 compute-0 ovn_controller[97948]: 2025-12-01T20:10:25Z|00181|binding|INFO|Removing iface tap3076324c-17 ovn-installed in OVS
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.069 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.076 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:bc:e0 10.100.3.29'], port_security=['fa:16:3e:ec:bc:e0 10.100.3.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.29/16', 'neutron:device_id': '2e63a3e2-688c-470f-9b69-98ac22f0c892', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31f326a2-1dd0-42fd-9a01-b17a7fb79ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4321fa83-980a-46fb-a7a0-cf14441fe575, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=3076324c-1772-4ebf-8d52-056282f5b5b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.078 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 3076324c-1772-4ebf-8d52-056282f5b5b9 in datapath b72e0b6b-24ff-49af-9297-d0f55dd2fe07 unbound from our chassis#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.080 106833 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b72e0b6b-24ff-49af-9297-d0f55dd2fe07#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.096 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.105 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[fb15d5ff-6984-47a6-bf7f-c76563435638]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:25 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  1 20:10:25 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 5min 29.445s CPU time.
Dec  1 20:10:25 compute-0 systemd-machined[155891]: Machine qemu-14-instance-0000000d terminated.
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.149 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d44bb1-edc3-471d-913d-5c033ec90ec4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.152 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d48efe-0bb4-41ed-a1d6-655adeab7da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.191 239942 DEBUG oslo.privsep.daemon [-] privsep: reply[8e162a5d-acce-4f36-97d9-c90c55b6449a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.225 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.244 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[874cbd3b-dcad-49fa-b807-ba51349f5fd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb72e0b6b-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:a1:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601774, 'reachable_time': 38829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259336, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.261 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc4e901-f3f5-46e1-a07c-c989970103f4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb72e0b6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601788, 'tstamp': 601788}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259352, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapb72e0b6b-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 601792, 'tstamp': 601792}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259352, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.264 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb72e0b6b-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.262 189568 INFO nova.virt.libvirt.driver [-] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Instance destroyed successfully.#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.263 189568 DEBUG nova.objects.instance [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lazy-loading 'resources' on Instance uuid 2e63a3e2-688c-470f-9b69-98ac22f0c892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.266 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.271 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.271 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb72e0b6b-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.272 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.272 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb72e0b6b-20, col_values=(('external_ids', {'iface-id': '7a2b95ce-3fa4-48e0-a152-7ae4f9eed7c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:10:25 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:25.273 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.275 189568 DEBUG nova.virt.libvirt.vif [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:05:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4551674-asg-jbxama3kkz6o-ydtfx5qziqnj-k254cxbeo4x2',id=13,image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:05:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f148fe63-b9e9-42f1-b9d7-8790a6058874'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce8fb01897ec4dc4a54e7b478a0450c6',ramdisk_id='',reservation_id='r-s00hz3dx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1865175500',owner_user_name='tempest-PrometheusGabbiTest-1865175500-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:05:31Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='87b1f4a5842648dead0562b1cf8b4f18',uuid=2e63a3e2-688c-470f-9b69-98ac22f0c892,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.276 189568 DEBUG nova.network.os_vif_util [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converting VIF {"id": "3076324c-1772-4ebf-8d52-056282f5b5b9", "address": "fa:16:3e:ec:bc:e0", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3076324c-17", "ovs_interfaceid": "3076324c-1772-4ebf-8d52-056282f5b5b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.277 189568 DEBUG nova.network.os_vif_util [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:bc:e0,bridge_name='br-int',has_traffic_filtering=True,id=3076324c-1772-4ebf-8d52-056282f5b5b9,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3076324c-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.278 189568 DEBUG os_vif [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:bc:e0,bridge_name='br-int',has_traffic_filtering=True,id=3076324c-1772-4ebf-8d52-056282f5b5b9,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3076324c-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.281 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.282 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3076324c-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.284 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.287 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.290 189568 INFO os_vif [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:bc:e0,bridge_name='br-int',has_traffic_filtering=True,id=3076324c-1772-4ebf-8d52-056282f5b5b9,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3076324c-17')#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.291 189568 INFO nova.virt.libvirt.driver [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Deleting instance files /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892_del#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.293 189568 INFO nova.virt.libvirt.driver [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Deletion of /var/lib/nova/instances/2e63a3e2-688c-470f-9b69-98ac22f0c892_del complete#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.345 189568 INFO nova.compute.manager [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.346 189568 DEBUG oslo.service.loopingcall [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.346 189568 DEBUG nova.compute.manager [-] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:10:25 compute-0 nova_compute[189564]: 2025-12-01 20:10:25.347 189568 DEBUG nova.network.neutron [-] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.037 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:26 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:26.039 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:b8:e0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'f2:87:69:a7:38:2b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  1 20:10:26 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:26.041 106833 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.252 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.804 189568 DEBUG nova.compute.manager [req-01f4bd1a-37bf-4c9d-8680-016531117fa1 req-cd1ef1fd-cd05-4199-9540-b8eb44559ad9 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Received event network-vif-unplugged-3076324c-1772-4ebf-8d52-056282f5b5b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.804 189568 DEBUG oslo_concurrency.lockutils [req-01f4bd1a-37bf-4c9d-8680-016531117fa1 req-cd1ef1fd-cd05-4199-9540-b8eb44559ad9 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.805 189568 DEBUG oslo_concurrency.lockutils [req-01f4bd1a-37bf-4c9d-8680-016531117fa1 req-cd1ef1fd-cd05-4199-9540-b8eb44559ad9 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.805 189568 DEBUG oslo_concurrency.lockutils [req-01f4bd1a-37bf-4c9d-8680-016531117fa1 req-cd1ef1fd-cd05-4199-9540-b8eb44559ad9 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.806 189568 DEBUG nova.compute.manager [req-01f4bd1a-37bf-4c9d-8680-016531117fa1 req-cd1ef1fd-cd05-4199-9540-b8eb44559ad9 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] No waiting events found dispatching network-vif-unplugged-3076324c-1772-4ebf-8d52-056282f5b5b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.806 189568 DEBUG nova.compute.manager [req-01f4bd1a-37bf-4c9d-8680-016531117fa1 req-cd1ef1fd-cd05-4199-9540-b8eb44559ad9 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Received event network-vif-unplugged-3076324c-1772-4ebf-8d52-056282f5b5b9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.964 189568 DEBUG nova.network.neutron [-] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:10:26 compute-0 nova_compute[189564]: 2025-12-01 20:10:26.979 189568 INFO nova.compute.manager [-] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Took 1.63 seconds to deallocate network for instance.#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.035 189568 DEBUG oslo_concurrency.lockutils [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.035 189568 DEBUG oslo_concurrency.lockutils [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.054 189568 DEBUG nova.compute.manager [req-6f41e138-7897-47a0-9ca9-f35c47941474 req-da909952-2ff5-43b7-8315-23230a341a7a 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Received event network-vif-deleted-3076324c-1772-4ebf-8d52-056282f5b5b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.070 189568 DEBUG nova.scheduler.client.report [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Refreshing inventories for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.087 189568 DEBUG nova.scheduler.client.report [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Updating ProviderTree inventory for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.088 189568 DEBUG nova.compute.provider_tree [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Updating inventory in ProviderTree for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.107 189568 DEBUG nova.scheduler.client.report [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Refreshing aggregate associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.134 189568 DEBUG nova.scheduler.client.report [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Refreshing trait associations for resource provider 0211b5d4-bab8-409f-8f53-df766ffbcb27, traits: COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_AMD_SVM,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_ABM,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AVX,HW_CPU_X86_SSE,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.195 189568 DEBUG nova.compute.provider_tree [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.211 189568 DEBUG nova.scheduler.client.report [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.234 189568 DEBUG oslo_concurrency.lockutils [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.270 189568 INFO nova.scheduler.client.report [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Deleted allocations for instance 2e63a3e2-688c-470f-9b69-98ac22f0c892#033[00m
Dec  1 20:10:27 compute-0 nova_compute[189564]: 2025-12-01 20:10:27.342 189568 DEBUG oslo_concurrency.lockutils [None req-a5d569d3-9244-4c68-b492-a7ec642a9396 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:28 compute-0 nova_compute[189564]: 2025-12-01 20:10:28.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.045 189568 DEBUG nova.compute.manager [req-5b92cd33-7634-4494-821d-5a0eea94e312 req-2526d73e-c9e6-4854-8722-c5522e2f52de 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Received event network-vif-plugged-3076324c-1772-4ebf-8d52-056282f5b5b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.047 189568 DEBUG oslo_concurrency.lockutils [req-5b92cd33-7634-4494-821d-5a0eea94e312 req-2526d73e-c9e6-4854-8722-c5522e2f52de 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.049 189568 DEBUG oslo_concurrency.lockutils [req-5b92cd33-7634-4494-821d-5a0eea94e312 req-2526d73e-c9e6-4854-8722-c5522e2f52de 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.049 189568 DEBUG oslo_concurrency.lockutils [req-5b92cd33-7634-4494-821d-5a0eea94e312 req-2526d73e-c9e6-4854-8722-c5522e2f52de 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "2e63a3e2-688c-470f-9b69-98ac22f0c892-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.050 189568 DEBUG nova.compute.manager [req-5b92cd33-7634-4494-821d-5a0eea94e312 req-2526d73e-c9e6-4854-8722-c5522e2f52de 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] No waiting events found dispatching network-vif-plugged-3076324c-1772-4ebf-8d52-056282f5b5b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.051 189568 WARNING nova.compute.manager [req-5b92cd33-7634-4494-821d-5a0eea94e312 req-2526d73e-c9e6-4854-8722-c5522e2f52de 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Received unexpected event network-vif-plugged-3076324c-1772-4ebf-8d52-056282f5b5b9 for instance with vm_state deleted and task_state None.#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.135 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 20:10:29 compute-0 podman[259356]: 2025-12-01 20:10:29.379802695 +0000 UTC m=+0.130693667 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.466 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "refresh_cache-1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.467 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquired lock "refresh_cache-1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  1 20:10:29 compute-0 nova_compute[189564]: 2025-12-01 20:10:29.467 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  1 20:10:29 compute-0 podman[203750]: time="2025-12-01T20:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:10:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29521 "" "Go-http-client/1.1"
Dec  1 20:10:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Dec  1 20:10:30 compute-0 nova_compute[189564]: 2025-12-01 20:10:30.285 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:30 compute-0 nova_compute[189564]: 2025-12-01 20:10:30.988 189568 DEBUG nova.network.neutron [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Updating instance_info_cache with network_info: [{"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.014 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Releasing lock "refresh_cache-1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.014 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.015 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.051 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.052 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.053 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.054 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.164 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.262 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.264 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.367 189568 DEBUG oslo_concurrency.processutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
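
The qemu-img probes above run under python3 -m oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30) before exec. A rough stdlib equivalent using resource.setrlimit in a preexec hook; the disk path and caps are the ones from this log:

    import json, os, resource, subprocess

    def limited():
        # Same caps as --as=1073741824 --cpu=30 in the log above.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    out = subprocess.run(
        ["qemu-img", "info",
         "/var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166/disk",
         "--force-share", "--output=json"],
        env={**os.environ, "LC_ALL": "C", "LANG": "C"},
        preexec_fn=limited, capture_output=True, check=True)
    info = json.loads(out.stdout)
    print(info["format"], info["virtual-size"])
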
Dec  1 20:10:31 compute-0 openstack_network_exporter[205914]: ERROR   20:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:10:31 compute-0 openstack_network_exporter[205914]: ERROR   20:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:10:31 compute-0 openstack_network_exporter[205914]: ERROR   20:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:10:31 compute-0 openstack_network_exporter[205914]: ERROR   20:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:10:31 compute-0 openstack_network_exporter[205914]: ERROR   20:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
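
The exporter's appctl errors above occur because ovs-appctl-style calls go over per-daemon unix control sockets, conventionally <rundir>/<daemon>.<pid>.ctl, and none exist where the exporter is looking (this compute node runs ovn-controller, not ovn-northd, and OVS may use a containerized run directory). A small sketch that checks the conventional default locations; the paths are the usual defaults, not taken from this log:

    import glob, os

    # Conventional OVS/OVN run directories; adjust for containerized layouts.
    for rundir in ("/var/run/openvswitch", "/var/run/ovn"):
        socks = glob.glob(os.path.join(rundir, "*.ctl"))
        if socks:
            for s in socks:
                print("control socket:", s)
        else:
            print("no control sockets under", rundir)
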
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.798 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.799 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5205MB free_disk=72.27659225463867GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.800 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.800 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.882 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Instance 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.883 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.883 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.962 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.977 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
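
Given the inventory above, the capacity Placement actually schedules against is (total - reserved) * allocation_ratio per resource class. Worked out for these numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2
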
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.993 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 20:10:31 compute-0 nova_compute[189564]: 2025-12-01 20:10:31.994 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:32.044 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=91869463-7ce7-4561-8225-db4a77bb5f12, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.226 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:10:32 compute-0 podman[259382]: 2025-12-01 20:10:32.333847667 +0000 UTC m=+0.103626893 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.588 189568 DEBUG oslo_concurrency.lockutils [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.590 189568 DEBUG oslo_concurrency.lockutils [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.591 189568 DEBUG oslo_concurrency.lockutils [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.591 189568 DEBUG oslo_concurrency.lockutils [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.592 189568 DEBUG oslo_concurrency.lockutils [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
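
The acquire/waited/held trio above is oslo.concurrency's standard named-lock pattern: an in-process lock keyed by a string serializes the critical section, and the decorator logs wait and hold times on entry and exit. A minimal sketch with the real lockutils context manager; the lock name is the instance UUID from this log and the body is a stand-in:

    from oslo_concurrency import lockutils

    def do_terminate_instance():
        # Stand-in for the real critical section.
        print("terminating under lock")

    # Same named, in-process lock the terminate path takes above.
    with lockutils.lock("1ba24bd2-a29b-4c5b-b8c7-cba0830ed166"):
        do_terminate_instance()
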
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.595 189568 INFO nova.compute.manager [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Terminating instance#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.597 189568 DEBUG nova.compute.manager [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  1 20:10:32 compute-0 kernel: tap3f58b3a2-d9 (unregistering): left promiscuous mode
Dec  1 20:10:32 compute-0 NetworkManager[56474]: <info>  [1764619832.6397] device (tap3f58b3a2-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  1 20:10:32 compute-0 ovn_controller[97948]: 2025-12-01T20:10:32Z|00182|binding|INFO|Releasing lport 3f58b3a2-d9b9-4462-8f74-88eea7d00105 from this chassis (sb_readonly=0)
Dec  1 20:10:32 compute-0 ovn_controller[97948]: 2025-12-01T20:10:32Z|00183|binding|INFO|Setting lport 3f58b3a2-d9b9-4462-8f74-88eea7d00105 down in Southbound
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.667 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:32 compute-0 ovn_controller[97948]: 2025-12-01T20:10:32Z|00184|binding|INFO|Removing iface tap3f58b3a2-d9 ovn-installed in OVS
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.671 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:32.678 106833 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:69:d7 10.100.1.231'], port_security=['fa:16:3e:a9:69:d7 10.100.1.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.231/16', 'neutron:device_id': '1ba24bd2-a29b-4c5b-b8c7-cba0830ed166', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce8fb01897ec4dc4a54e7b478a0450c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '31f326a2-1dd0-42fd-9a01-b17a7fb79ecb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4321fa83-980a-46fb-a7a0-cf14441fe575, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>], logical_port=3f58b3a2-d9b9-4462-8f74-88eea7d00105) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f1b36766670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
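
The "Matched UPDATE" line above is ovsdbapp's row-event machinery: the agent registers an event bound to a table and change type, and the IDL calls it when a matching row changes (here, Port_Binding going up=[True] to up=[False]). A sketch of the usual subclassing pattern, assuming ovsdbapp's RowEvent base takes the events/table/conditions fields its repr in this log shows:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires on Port_Binding updates, like the agent event logged above."""
        def __init__(self):
            super().__init__(events=('update',), table='Port_Binding',
                             conditions=None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # 'old' carries only the changed columns, e.g. up / chassis here.
            print('Port_Binding', row.logical_port, 'changed; was up:', old.up)
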
Dec  1 20:10:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:32.679 106833 INFO neutron.agent.ovn.metadata.agent [-] Port 3f58b3a2-d9b9-4462-8f74-88eea7d00105 in datapath b72e0b6b-24ff-49af-9297-d0f55dd2fe07 unbound from our chassis#033[00m
Dec  1 20:10:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:32.680 106833 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b72e0b6b-24ff-49af-9297-d0f55dd2fe07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  1 20:10:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:32.682 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3a91d0-8e72-476b-921c-a2bf9ca980aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:32 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:32.682 106833 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07 namespace which is not needed anymore#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.699 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:32 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec  1 20:10:32 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 2min 10.650s CPU time.
Dec  1 20:10:32 compute-0 systemd-machined[155891]: Machine qemu-15-instance-0000000e terminated.
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.823 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.835 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:32 compute-0 neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07[257384]: [NOTICE]   (257390) : haproxy version is 2.8.14-c23fe91
Dec  1 20:10:32 compute-0 neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07[257384]: [NOTICE]   (257390) : path to executable is /usr/sbin/haproxy
Dec  1 20:10:32 compute-0 neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07[257384]: [WARNING]  (257390) : Exiting Master process...
Dec  1 20:10:32 compute-0 neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07[257384]: [ALERT]    (257390) : Current worker (257393) exited with code 143 (Terminated)
Dec  1 20:10:32 compute-0 neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07[257384]: [WARNING]  (257390) : All workers exited. Exiting... (0)
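
The haproxy worker's exit code 143 above follows the usual 128 + signal-number convention, i.e. SIGTERM (15): the worker was deliberately terminated as the metadata proxy container was stopped, not crashed. Decoded:

    import signal

    code = 143
    print(signal.Signals(code - 128).name)   # SIGTERM
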
Dec  1 20:10:32 compute-0 systemd[1]: libpod-803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58.scope: Deactivated successfully.
Dec  1 20:10:32 compute-0 podman[259428]: 2025-12-01 20:10:32.87740894 +0000 UTC m=+0.085454337 container died 803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.880 189568 INFO nova.virt.libvirt.driver [-] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Instance destroyed successfully.#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.880 189568 DEBUG nova.objects.instance [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lazy-loading 'resources' on Instance uuid 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.894 189568 DEBUG nova.virt.libvirt.vif [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-01T20:08:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4551674-asg-jbxama3kkz6o-bxsvliczlwdv-hbpajxundnbg',id=14,image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-01T20:08:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f148fe63-b9e9-42f1-b9d7-8790a6058874'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ce8fb01897ec4dc4a54e7b478a0450c6',ramdisk_id='',reservation_id='r-boklydnb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bffb6851-f47b-44e0-90e7-e01d72f9a4d2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1865175500',owner_user_name='tempest-PrometheusGabbiTest-1865175500-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-01T20:08:34Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='87b1f4a5842648dead0562b1cf8b4f18',uuid=1ba24bd2-a29b-4c5b-b8c7-cba0830ed166,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.895 189568 DEBUG nova.network.os_vif_util [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converting VIF {"id": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "address": "fa:16:3e:a9:69:d7", "network": {"id": "b72e0b6b-24ff-49af-9297-d0f55dd2fe07", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ce8fb01897ec4dc4a54e7b478a0450c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f58b3a2-d9", "ovs_interfaceid": "3f58b3a2-d9b9-4462-8f74-88eea7d00105", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.896 189568 DEBUG nova.network.os_vif_util [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a9:69:d7,bridge_name='br-int',has_traffic_filtering=True,id=3f58b3a2-d9b9-4462-8f74-88eea7d00105,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f58b3a2-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.896 189568 DEBUG os_vif [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:69:d7,bridge_name='br-int',has_traffic_filtering=True,id=3f58b3a2-d9b9-4462-8f74-88eea7d00105,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f58b3a2-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.898 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.898 189568 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f58b3a2-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.900 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.901 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.904 189568 INFO os_vif [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a9:69:d7,bridge_name='br-int',has_traffic_filtering=True,id=3f58b3a2-d9b9-4462-8f74-88eea7d00105,network=Network(b72e0b6b-24ff-49af-9297-d0f55dd2fe07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f58b3a2-d9')#033[00m
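
The DelPortCommand transaction above is os-vif issuing, through ovsdbapp, the same operation as an ovs-vsctl del-port; if_exists=True makes it idempotent. The equivalent cleanup from the shell, with the port and bridge names from this log:

    import subprocess

    # Mirrors DelPortCommand(port=tap3f58b3a2-d9, bridge=br-int, if_exists=True).
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap3f58b3a2-d9"],
        check=True)
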
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.905 189568 INFO nova.virt.libvirt.driver [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Deleting instance files /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166_del#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.905 189568 INFO nova.virt.libvirt.driver [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Deletion of /var/lib/nova/instances/1ba24bd2-a29b-4c5b-b8c7-cba0830ed166_del complete#033[00m
Dec  1 20:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58-userdata-shm.mount: Deactivated successfully.
Dec  1 20:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-406ed8560c48f03beb80b2376ade8d190ee606e5e4d51fe4ef571637e01257a0-merged.mount: Deactivated successfully.
Dec  1 20:10:32 compute-0 podman[259428]: 2025-12-01 20:10:32.931917249 +0000 UTC m=+0.139962646 container cleanup 803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  1 20:10:32 compute-0 systemd[1]: libpod-conmon-803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58.scope: Deactivated successfully.
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.961 189568 INFO nova.compute.manager [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.962 189568 DEBUG oslo.service.loopingcall [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.962 189568 DEBUG nova.compute.manager [-] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  1 20:10:32 compute-0 nova_compute[189564]: 2025-12-01 20:10:32.963 189568 DEBUG nova.network.neutron [-] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  1 20:10:33 compute-0 podman[259475]: 2025-12-01 20:10:33.018649974 +0000 UTC m=+0.063331576 container remove 803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  1 20:10:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:33.030 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3b70fe-cf08-487a-830d-bda69ce799e3]: (4, ('Mon Dec  1 08:10:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07 (803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58)\n803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58\nMon Dec  1 08:10:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07 (803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58)\n803efa2fed4252f29b6278787149b07077d33b5877f30c8855bac09c74b31b58\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:33.032 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3a911d-812b-4125-8ff8-dccca10d72f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:33.034 106833 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb72e0b6b-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  1 20:10:33 compute-0 nova_compute[189564]: 2025-12-01 20:10:33.036 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:33 compute-0 kernel: tapb72e0b6b-20: left promiscuous mode
Dec  1 20:10:33 compute-0 nova_compute[189564]: 2025-12-01 20:10:33.053 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:33.057 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[f59b8c62-7e2a-4ea6-ab4d-4783f7950ac5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:33.077 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6dac02-037f-473f-b243-f5d2a4eb07bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:33.079 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3676e3-7a79-4749-bd8a-963ccea90a4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:33.094 239862 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0e7aec-3c04-46e4-8c02-9c753c246d28]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 601765, 'reachable_time': 24106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259490, 'error': None, 'target': 'ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:33.096 106945 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b72e0b6b-24ff-49af-9297-d0f55dd2fe07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  1 20:10:33 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:10:33.096 106945 DEBUG oslo.privsep.daemon [-] privsep: reply[09ea68cd-0c13-44f2-a18a-602436936f59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  1 20:10:33 compute-0 systemd[1]: run-netns-ovnmeta\x2db72e0b6b\x2d24ff\x2d49af\x2d9297\x2dd0f55dd2fe07.mount: Deactivated successfully.
Dec  1 20:10:34 compute-0 nova_compute[189564]: 2025-12-01 20:10:34.138 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:34 compute-0 nova_compute[189564]: 2025-12-01 20:10:34.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:10:35 compute-0 nova_compute[189564]: 2025-12-01 20:10:35.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:10:35 compute-0 nova_compute[189564]: 2025-12-01 20:10:35.817 189568 DEBUG nova.compute.manager [req-e9067129-1ec4-4591-80d6-f82e77f5ba8f req-347267f0-69ca-483c-aaa0-ca0c3ffcfdbe 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Received event network-vif-unplugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:10:35 compute-0 nova_compute[189564]: 2025-12-01 20:10:35.817 189568 DEBUG oslo_concurrency.lockutils [req-e9067129-1ec4-4591-80d6-f82e77f5ba8f req-347267f0-69ca-483c-aaa0-ca0c3ffcfdbe 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:35 compute-0 nova_compute[189564]: 2025-12-01 20:10:35.818 189568 DEBUG oslo_concurrency.lockutils [req-e9067129-1ec4-4591-80d6-f82e77f5ba8f req-347267f0-69ca-483c-aaa0-ca0c3ffcfdbe 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:35 compute-0 nova_compute[189564]: 2025-12-01 20:10:35.819 189568 DEBUG oslo_concurrency.lockutils [req-e9067129-1ec4-4591-80d6-f82e77f5ba8f req-347267f0-69ca-483c-aaa0-ca0c3ffcfdbe 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:35 compute-0 nova_compute[189564]: 2025-12-01 20:10:35.820 189568 DEBUG nova.compute.manager [req-e9067129-1ec4-4591-80d6-f82e77f5ba8f req-347267f0-69ca-483c-aaa0-ca0c3ffcfdbe 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] No waiting events found dispatching network-vif-unplugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:10:35 compute-0 nova_compute[189564]: 2025-12-01 20:10:35.821 189568 DEBUG nova.compute.manager [req-e9067129-1ec4-4591-80d6-f82e77f5ba8f req-347267f0-69ca-483c-aaa0-ca0c3ffcfdbe 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Received event network-vif-unplugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
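
"No waiting events found" above means no thread had registered interest in network-vif-unplugged before Neutron delivered it; during deletion that is harmless, since the terminate path does not block on the unplug event. The underlying pattern is a per-instance registry of named events that one side prepares and the other side pops. A minimal illustrative sketch (not Nova's actual implementation) using threading.Event:

    import threading

    class InstanceEvents:
        """Toy version of the prepare/pop pattern logged above."""
        def __init__(self):
            self._events = {}          # (instance, name) -> threading.Event
            self._lock = threading.Lock()

        def prepare(self, instance, name):
            with self._lock:
                ev = self._events[(instance, name)] = threading.Event()
            return ev            # caller later does ev.wait(timeout)

        def pop(self, instance, name):
            with self._lock:
                ev = self._events.pop((instance, name), None)
            if ev is None:
                print("No waiting events found dispatching", name)
            else:
                ev.set()

    events = InstanceEvents()
    # Nobody prepared this event, so it is logged and dropped:
    events.pop("1ba24bd2-a29b-4c5b-b8c7-cba0830ed166", "network-vif-unplugged")
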
Dec  1 20:10:36 compute-0 podman[259495]: 2025-12-01 20:10:36.325761528 +0000 UTC m=+0.085392145 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  1 20:10:36 compute-0 podman[259494]: 2025-12-01 20:10:36.333431857 +0000 UTC m=+0.098331358 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, architecture=x86_64, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.333 189568 DEBUG nova.network.neutron [-] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  1 20:10:36 compute-0 podman[259507]: 2025-12-01 20:10:36.344662207 +0000 UTC m=+0.088140350 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.349 189568 INFO nova.compute.manager [-] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Took 3.39 seconds to deallocate network for instance.#033[00m
Dec  1 20:10:36 compute-0 podman[259499]: 2025-12-01 20:10:36.369392248 +0000 UTC m=+0.106406020 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.389 189568 DEBUG oslo_concurrency.lockutils [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.389 189568 DEBUG oslo_concurrency.lockutils [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.412 189568 DEBUG nova.compute.manager [req-4c4c4f00-5215-4e25-bcc8-2f4da6e88ebe req-f2f0ac03-efff-4d06-ae75-fe34807fcfb2 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Received event network-vif-deleted-3f58b3a2-d9b9-4462-8f74-88eea7d00105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:10:36 compute-0 podman[259511]: 2025-12-01 20:10:36.416663192 +0000 UTC m=+0.148520593 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.438 189568 DEBUG nova.compute.provider_tree [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.460 189568 DEBUG nova.scheduler.client.report [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
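
The inventory payload above is how the resource tracker reports capacity to Placement; the schedulable amount per resource class is (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic, with the values copied from the logged payload:

    # Effective capacity as Placement derives it from the inventory above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    capacity = {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
                for rc, v in inventory.items()}
    print(capacity)  # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: ~70.2
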
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.482 189568 DEBUG oslo_concurrency.lockutils [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.510 189568 INFO nova.scheduler.client.report [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Deleted allocations for instance 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166#033[00m
Dec  1 20:10:36 compute-0 nova_compute[189564]: 2025-12-01 20:10:36.576 189568 DEBUG oslo_concurrency.lockutils [None req-9523dfe7-9f78-421d-8a30-e67c614c8f6d 87b1f4a5842648dead0562b1cf8b4f18 ce8fb01897ec4dc4a54e7b478a0450c6 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
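
The "Acquiring" / "acquired :: waited Ns" / "released :: held Ns" triplets around the terminate sequence are oslo.concurrency's standard lock instrumentation: every path that touches the resource tracker serializes on the "compute_resources" lock. A minimal sketch of the same pattern, assuming oslo.concurrency is installed (the function name is illustrative):

    from oslo_concurrency import lockutils

    # Entering logs "Acquiring ..." then "acquired ... :: waited Ns";
    # returning logs 'released ... :: held Ns', as in the lines above.
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass  # mutate resource-tracker state while holding the lock
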
Dec  1 20:10:36 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  1 20:10:37 compute-0 nova_compute[189564]: 2025-12-01 20:10:37.902 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:37 compute-0 nova_compute[189564]: 2025-12-01 20:10:37.924 189568 DEBUG nova.compute.manager [req-e21eafec-1a12-4d18-89a5-ee259c08da82 req-4e821ad9-e8bc-4943-8d44-52a5d6284e51 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Received event network-vif-plugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  1 20:10:37 compute-0 nova_compute[189564]: 2025-12-01 20:10:37.925 189568 DEBUG oslo_concurrency.lockutils [req-e21eafec-1a12-4d18-89a5-ee259c08da82 req-4e821ad9-e8bc-4943-8d44-52a5d6284e51 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Acquiring lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:10:37 compute-0 nova_compute[189564]: 2025-12-01 20:10:37.926 189568 DEBUG oslo_concurrency.lockutils [req-e21eafec-1a12-4d18-89a5-ee259c08da82 req-4e821ad9-e8bc-4943-8d44-52a5d6284e51 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:10:37 compute-0 nova_compute[189564]: 2025-12-01 20:10:37.926 189568 DEBUG oslo_concurrency.lockutils [req-e21eafec-1a12-4d18-89a5-ee259c08da82 req-4e821ad9-e8bc-4943-8d44-52a5d6284e51 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] Lock "1ba24bd2-a29b-4c5b-b8c7-cba0830ed166-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:10:37 compute-0 nova_compute[189564]: 2025-12-01 20:10:37.927 189568 DEBUG nova.compute.manager [req-e21eafec-1a12-4d18-89a5-ee259c08da82 req-4e821ad9-e8bc-4943-8d44-52a5d6284e51 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] No waiting events found dispatching network-vif-plugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  1 20:10:37 compute-0 nova_compute[189564]: 2025-12-01 20:10:37.928 189568 WARNING nova.compute.manager [req-e21eafec-1a12-4d18-89a5-ee259c08da82 req-4e821ad9-e8bc-4943-8d44-52a5d6284e51 8141e98fb94749b083df4b60a326419b 58466eba0634458f8dd92f929824a1d9 - - default default] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Received unexpected event network-vif-plugged-3f58b3a2-d9b9-4462-8f74-88eea7d00105 for instance with vm_state deleted and task_state None.#033[00m
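
The warning above is benign: the network-vif-plugged event raced with the delete, so no waiter was registered and the instance is already gone. A hypothetical mirror of the pop-or-warn flow (names and structure are illustrative, not nova's actual code):

    import threading

    _waiters = {}             # {instance_uuid: {event_name: threading.Event}}
    _lock = threading.Lock()  # stands in for the "<uuid>-events" lock above

    def pop_instance_event(instance_uuid, event_name):
        with _lock:
            waiter = _waiters.get(instance_uuid, {}).pop(event_name, None)
        if waiter is None:
            # "No waiting events found" -> "Received unexpected event ..."
            print(f'unexpected event {event_name} for {instance_uuid}')
            return
        waiter.set()  # wakes the thread blocked waiting for this event
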
Dec  1 20:10:39 compute-0 nova_compute[189564]: 2025-12-01 20:10:39.142 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:40 compute-0 nova_compute[189564]: 2025-12-01 20:10:40.257 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619825.255658, 2e63a3e2-688c-470f-9b69-98ac22f0c892 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:10:40 compute-0 nova_compute[189564]: 2025-12-01 20:10:40.258 189568 INFO nova.compute.manager [-] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] VM Stopped (Lifecycle Event)#033[00m
Dec  1 20:10:40 compute-0 nova_compute[189564]: 2025-12-01 20:10:40.286 189568 DEBUG nova.compute.manager [None req-a71d4eda-0c5f-4711-a514-6cb667eb1bf4 - - - - - -] [instance: 2e63a3e2-688c-470f-9b69-98ac22f0c892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:10:42 compute-0 nova_compute[189564]: 2025-12-01 20:10:42.907 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:44 compute-0 nova_compute[189564]: 2025-12-01 20:10:44.144 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
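
The recurring "[POLLIN] on fd 27" lines are ovsdbapp's IDL thread noticing that the OVSDB monitor socket became readable; note the roughly 5-second cadence, consistent with the connection's inactivity-probe keepalive. A standard-library sketch of the same readiness wait (the socket path is an assumption):

    import select
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect('/run/openvswitch/db.sock')   # assumed OVSDB socket path

    poller = select.poll()
    poller.register(sock.fileno(), select.POLLIN)
    for fd, events in poller.poll():           # blocks; "[POLLIN] on fd N"
        if events & select.POLLIN:
            sock.recv(4096)                    # JSON-RPC update or echo probe
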
Dec  1 20:10:47 compute-0 nova_compute[189564]: 2025-12-01 20:10:47.874 189568 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764619832.8712263, 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  1 20:10:47 compute-0 nova_compute[189564]: 2025-12-01 20:10:47.874 189568 INFO nova.compute.manager [-] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] VM Stopped (Lifecycle Event)#033[00m
Dec  1 20:10:47 compute-0 nova_compute[189564]: 2025-12-01 20:10:47.912 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:47 compute-0 nova_compute[189564]: 2025-12-01 20:10:47.938 189568 DEBUG nova.compute.manager [None req-25d5d866-8f73-4813-a255-fd00155c6a9f - - - - - -] [instance: 1ba24bd2-a29b-4c5b-b8c7-cba0830ed166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  1 20:10:48 compute-0 podman[259595]: 2025-12-01 20:10:48.332190647 +0000 UTC m=+0.099864905 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:10:48 compute-0 podman[259596]: 2025-12-01 20:10:48.343902972 +0000 UTC m=+0.096833260 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
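
Every health_status record follows the same convention visible in its config_data: a host directory /var/lib/openstack/healthchecks/<service> is mounted read-only at /openstack, and podman periodically executes the configured test (/openstack/healthcheck). The same check can be triggered by hand; a sketch using the podman CLI:

    import subprocess

    def is_healthy(container: str) -> bool:
        # Exit code 0 == "healthy", i.e. health_failing_streak stays at 0.
        return subprocess.run(
            ['podman', 'healthcheck', 'run', container],
            capture_output=True,
        ).returncode == 0

    print(is_healthy('ovn_controller'))
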
Dec  1 20:10:49 compute-0 nova_compute[189564]: 2025-12-01 20:10:49.150 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:52 compute-0 nova_compute[189564]: 2025-12-01 20:10:52.917 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:54 compute-0 nova_compute[189564]: 2025-12-01 20:10:54.153 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:55 compute-0 nova_compute[189564]: 2025-12-01 20:10:55.178 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:57 compute-0 nova_compute[189564]: 2025-12-01 20:10:57.922 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:59 compute-0 nova_compute[189564]: 2025-12-01 20:10:59.156 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:10:59 compute-0 podman[203750]: time="2025-12-01T20:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:10:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 20:10:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4346 "" "Go-http-client/1.1"
Dec  1 20:11:00 compute-0 podman[259639]: 2025-12-01 20:11:00.3512072 +0000 UTC m=+0.117971891 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 20:11:01 compute-0 openstack_network_exporter[205914]: ERROR   20:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:11:01 compute-0 openstack_network_exporter[205914]: ERROR   20:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:11:01 compute-0 openstack_network_exporter[205914]: ERROR   20:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:11:01 compute-0 openstack_network_exporter[205914]: ERROR   20:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:11:01 compute-0 openstack_network_exporter[205914]: ERROR   20:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
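
These appctl.go errors repeat every polling interval and are expected on a compute node: the exporter looks for ovs-appctl control sockets before issuing commands, and ovn-northd only runs on controller nodes. A sketch of the discovery step it is failing, in Python for consistency (the exporter itself is Go; daemon control sockets are created as <rundir>/<daemon>.<pid>.ctl, and the lookup logic here is inferred from the error text):

    import glob

    for rundir, daemon in [('/run/openvswitch', 'ovsdb-server'),
                           ('/run/ovn', 'ovn-northd')]:
        found = glob.glob(f'{rundir}/{daemon}.*.ctl')
        print(daemon, '->', found or 'no control socket files found')
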
Dec  1 20:11:02 compute-0 nova_compute[189564]: 2025-12-01 20:11:02.925 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:03 compute-0 podman[259657]: 2025-12-01 20:11:03.279704444 +0000 UTC m=+0.056249365 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  1 20:11:04 compute-0 nova_compute[189564]: 2025-12-01 20:11:04.158 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:07 compute-0 podman[259684]: 2025-12-01 20:11:07.359116245 +0000 UTC m=+0.102635883 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  1 20:11:07 compute-0 podman[259683]: 2025-12-01 20:11:07.359005071 +0000 UTC m=+0.111478768 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release-0.7.12=, build-date=2024-09-18T21:23:30, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9)
Dec  1 20:11:07 compute-0 podman[259686]: 2025-12-01 20:11:07.386520979 +0000 UTC m=+0.128851690 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 20:11:07 compute-0 podman[259685]: 2025-12-01 20:11:07.391855195 +0000 UTC m=+0.129847991 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  1 20:11:07 compute-0 podman[259692]: 2025-12-01 20:11:07.427417234 +0000 UTC m=+0.152366152 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  1 20:11:07 compute-0 nova_compute[189564]: 2025-12-01 20:11:07.927 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:09 compute-0 nova_compute[189564]: 2025-12-01 20:11:09.161 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:11:12.231 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:11:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:11:12.232 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:11:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:11:12.232 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:11:12 compute-0 nova_compute[189564]: 2025-12-01 20:11:12.932 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:14 compute-0 nova_compute[189564]: 2025-12-01 20:11:14.164 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:17 compute-0 nova_compute[189564]: 2025-12-01 20:11:17.935 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:19 compute-0 nova_compute[189564]: 2025-12-01 20:11:19.166 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:19 compute-0 podman[259779]: 2025-12-01 20:11:19.316766694 +0000 UTC m=+0.089637547 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  1 20:11:19 compute-0 podman[259778]: 2025-12-01 20:11:19.333712681 +0000 UTC m=+0.100816095 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  1 20:11:22 compute-0 nova_compute[189564]: 2025-12-01 20:11:22.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:11:22 compute-0 nova_compute[189564]: 2025-12-01 20:11:22.940 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:24 compute-0 nova_compute[189564]: 2025-12-01 20:11:24.168 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:24 compute-0 nova_compute[189564]: 2025-12-01 20:11:24.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:11:24 compute-0 nova_compute[189564]: 2025-12-01 20:11:24.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
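
The "Running periodic task" lines come from oslo.service's PeriodicTasks loop; _reclaim_queued_deletes fires on schedule but returns immediately because soft-delete reclaim is disabled (reclaim_instance_interval <= 0). A sketch of how such a task is declared, assuming oslo.service:

    from oslo_service import periodic_task

    RECLAIM_INSTANCE_INTERVAL = 0  # CONF.reclaim_instance_interval in the log

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # seconds between runs
        def _reclaim_queued_deletes(self, context):
            if RECLAIM_INSTANCE_INTERVAL <= 0:
                return  # "CONF.reclaim_instance_interval <= 0, skipping..."
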
Dec  1 20:11:27 compute-0 nova_compute[189564]: 2025-12-01 20:11:27.945 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:28 compute-0 nova_compute[189564]: 2025-12-01 20:11:28.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:11:29 compute-0 nova_compute[189564]: 2025-12-01 20:11:29.171 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:29 compute-0 nova_compute[189564]: 2025-12-01 20:11:29.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:11:29 compute-0 nova_compute[189564]: 2025-12-01 20:11:29.248 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  1 20:11:29 compute-0 nova_compute[189564]: 2025-12-01 20:11:29.249 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  1 20:11:29 compute-0 nova_compute[189564]: 2025-12-01 20:11:29.262 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  1 20:11:29 compute-0 nova_compute[189564]: 2025-12-01 20:11:29.263 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:11:29 compute-0 nova_compute[189564]: 2025-12-01 20:11:29.263 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:11:29 compute-0 nova_compute[189564]: 2025-12-01 20:11:29.263 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  1 20:11:29 compute-0 nova_compute[189564]: 2025-12-01 20:11:29.280 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  1 20:11:29 compute-0 podman[203750]: time="2025-12-01T20:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:11:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 20:11:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Dec  1 20:11:30 compute-0 nova_compute[189564]: 2025-12-01 20:11:30.267 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:11:30 compute-0 nova_compute[189564]: 2025-12-01 20:11:30.303 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:11:30 compute-0 nova_compute[189564]: 2025-12-01 20:11:30.304 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:11:30 compute-0 nova_compute[189564]: 2025-12-01 20:11:30.304 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:11:30 compute-0 nova_compute[189564]: 2025-12-01 20:11:30.305 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  1 20:11:30 compute-0 nova_compute[189564]: 2025-12-01 20:11:30.833 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  1 20:11:30 compute-0 nova_compute[189564]: 2025-12-01 20:11:30.834 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5375MB free_disk=72.3053092956543GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  1 20:11:30 compute-0 nova_compute[189564]: 2025-12-01 20:11:30.835 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  1 20:11:30 compute-0 nova_compute[189564]: 2025-12-01 20:11:30.836 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  1 20:11:30 compute-0 ovn_controller[97948]: 2025-12-01T20:11:30Z|00185|memory_trim|INFO|Detected inactivity (last active 30022 ms ago): trimming memory
Dec  1 20:11:31 compute-0 nova_compute[189564]: 2025-12-01 20:11:31.126 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  1 20:11:31 compute-0 nova_compute[189564]: 2025-12-01 20:11:31.127 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  1 20:11:31 compute-0 nova_compute[189564]: 2025-12-01 20:11:31.225 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  1 20:11:31 compute-0 nova_compute[189564]: 2025-12-01 20:11:31.239 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  1 20:11:31 compute-0 nova_compute[189564]: 2025-12-01 20:11:31.260 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  1 20:11:31 compute-0 nova_compute[189564]: 2025-12-01 20:11:31.261 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  1 20:11:31 compute-0 podman[259827]: 2025-12-01 20:11:31.365113942 +0000 UTC m=+0.123785422 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  1 20:11:31 compute-0 openstack_network_exporter[205914]: ERROR   20:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:11:31 compute-0 openstack_network_exporter[205914]: ERROR   20:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:11:31 compute-0 openstack_network_exporter[205914]: ERROR   20:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:11:31 compute-0 openstack_network_exporter[205914]: ERROR   20:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:11:31 compute-0 openstack_network_exporter[205914]: ERROR   20:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:11:32 compute-0 nova_compute[189564]: 2025-12-01 20:11:32.950 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:34 compute-0 nova_compute[189564]: 2025-12-01 20:11:34.176 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  1 20:11:34 compute-0 nova_compute[189564]: 2025-12-01 20:11:34.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  1 20:11:34 compute-0 podman[259847]: 2025-12-01 20:11:34.300362657 +0000 UTC m=+0.071016865 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  1 20:11:36 compute-0 nova_compute[189564]: 2025-12-01 20:11:36.244 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:11:36 compute-0 nova_compute[189564]: 2025-12-01 20:11:36.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
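Note: the Running periodic task lines are ComputeManager methods dispatched by oslo.service's periodic task machinery. A minimal, illustrative sketch of how such tasks are declared and run (spacing value assumed; requires oslo.service and oslo.config):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        """Stand-in for nova's ComputeManager; the real tasks live in nova.compute.manager."""

        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass  # nova's real task gathers volume usage data

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # the service loop invokes this on a timer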
Dec  1 20:11:37 compute-0 nova_compute[189564]: 2025-12-01 20:11:37.955 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:38 compute-0 podman[259874]: 2025-12-01 20:11:38.334849196 +0000 UTC m=+0.091518915 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  1 20:11:38 compute-0 podman[259872]: 2025-12-01 20:11:38.346543901 +0000 UTC m=+0.101108845 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64)
Dec  1 20:11:38 compute-0 podman[259875]: 2025-12-01 20:11:38.35354302 +0000 UTC m=+0.104417428 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  1 20:11:38 compute-0 podman[259873]: 2025-12-01 20:11:38.354105906 +0000 UTC m=+0.112984754 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  1 20:11:38 compute-0 podman[259876]: 2025-12-01 20:11:38.38723103 +0000 UTC m=+0.126615480 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
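Note: the health_status=healthy entries are emitted each time podman runs a container's configured healthcheck, i.e. the 'test' command with the healthcheck script bind-mounted at the 'mount' path. The same check can be triggered on demand; a sketch using the multipathd container from earlier in the log (exit status 0 means healthy):

    import subprocess

    # `podman healthcheck run <name>` executes the container's configured test once.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"],
        capture_output=True,
        text=True,
    )
    print("healthy" if result.returncode == 0 else f"unhealthy: {result.stdout or result.stderr}")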
Dec  1 20:11:39 compute-0 nova_compute[189564]: 2025-12-01 20:11:39.179 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:42 compute-0 nova_compute[189564]: 2025-12-01 20:11:42.960 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:43 compute-0 nova_compute[189564]: 2025-12-01 20:11:43.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:11:43 compute-0 nova_compute[189564]: 2025-12-01 20:11:43.263 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:11:44 compute-0 nova_compute[189564]: 2025-12-01 20:11:44.181 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:47 compute-0 nova_compute[189564]: 2025-12-01 20:11:47.963 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.825 15 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.825 15 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
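Note: this pair of messages means the [pollsters] source defines more pollsters than worker threads, so with one thread the registered tasks queue on the executor and run sequentially. A small sketch of that queuing behaviour (timings illustrative only):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)  # stand-in for one meter's polling work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as executor:  # mirrors "[1] threads"
        futures = [executor.submit(pollster, f"meter-{i}") for i in range(5)]
        results = [f.result() for f in futures]
    print(results, f"~{time.monotonic() - start:.1f}s")  # ~0.5s: queued, not parallel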
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.825 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.826 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fcf6cc3f830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.826 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6efc98b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e41a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.827 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6e1e92e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.828 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.829 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.829 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.829 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.830 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6c2e4470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.830 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.830 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.830 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3e540>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.831 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.831 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.831 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.831 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fe30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.832 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3d7c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.833 15 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fcf6cc3f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fcf6f28f260>] with cache [{}], pollster history [{'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.828 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.833 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fcf6c2e4050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.833 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.834 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fcf6cc3ff20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.834 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.834 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fcf6c2e40e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.834 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.834 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fcf6c2e4170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.835 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.835 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fcf6cc3d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.835 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.835 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fcf6cc3f1d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.835 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.836 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fcf6cc3f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.836 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.836 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fcf6cc3fd10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.836 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.836 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fcf6cc3f2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.837 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.837 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fcf6cc3f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.837 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.837 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fcf6cc3f3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.837 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.837 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fcf6cc3f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.837 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.838 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fcf6c2e4440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.838 15 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.838 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fcf6cc3f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.838 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.838 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fcf6cc3f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.838 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.838 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fcf6cc3e5d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.838 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.838 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fcf6cc3f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.839 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fcf6cc3fd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.839 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fcf6cc3f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.839 15 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fcf6cc3fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.839 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.839 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fcf6cc3fe00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.840 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.840 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fcf6cc3fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.840 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.840 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fcf6cc3ff80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.840 15 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.840 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fcf6cbd1b80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.840 15 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.840 15 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fcf6cc3f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fcf6cc07a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.840 15 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.841 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.841 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.841 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.841 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.842 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.842 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.842 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.842 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.842 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.843 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.843 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.843 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.843 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.843 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.844 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.844 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.844 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.844 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.844 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.845 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.846 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.846 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  1 20:11:48 compute-0 ceilometer_agent_compute[200308]: 2025-12-01 20:11:48.846 15 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
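Note: every pollster in this cycle ran local_instances discovery, found no instances on compute-0, and was skipped, which is why each meter logs a Skip line and then Finished processing. A compact sketch of that per-pollster flow (names and structure assumed for illustration):

    def run_pollster(name, discover):
        resources = discover()  # e.g. the local_instances discovery method
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return []
        return [(name, r) for r in resources]

    # No instances run on this host, so discovery yields an empty list.
    samples = run_pollster("memory.usage", discover=lambda: [])
    print(f"Finished processing pollster [memory.usage], {len(samples)} samples")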
Dec  1 20:11:49 compute-0 nova_compute[189564]: 2025-12-01 20:11:49.183 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:50 compute-0 podman[259976]: 2025-12-01 20:11:50.340517333 +0000 UTC m=+0.105074429 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  1 20:11:50 compute-0 podman[259977]: 2025-12-01 20:11:50.341339449 +0000 UTC m=+0.097453741 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 20:11:52 compute-0 nova_compute[189564]: 2025-12-01 20:11:52.969 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:54 compute-0 nova_compute[189564]: 2025-12-01 20:11:54.185 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:57 compute-0 nova_compute[189564]: 2025-12-01 20:11:57.974 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:59 compute-0 nova_compute[189564]: 2025-12-01 20:11:59.188 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:11:59 compute-0 podman[203750]: time="2025-12-01T20:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:11:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 20:11:59 compute-0 podman[203750]: @ - - [01/Dec/2025:20:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4344 "" "Go-http-client/1.1"
Dec  1 20:12:01 compute-0 openstack_network_exporter[205914]: ERROR   20:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:12:01 compute-0 openstack_network_exporter[205914]: ERROR   20:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:12:01 compute-0 openstack_network_exporter[205914]: ERROR   20:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:12:01 compute-0 openstack_network_exporter[205914]: ERROR   20:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:12:01 compute-0 openstack_network_exporter[205914]: ERROR   20:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:12:02 compute-0 podman[260022]: 2025-12-01 20:12:02.30321638 +0000 UTC m=+0.077043224 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  1 20:12:02 compute-0 nova_compute[189564]: 2025-12-01 20:12:02.978 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:04 compute-0 nova_compute[189564]: 2025-12-01 20:12:04.192 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:05 compute-0 nova_compute[189564]: 2025-12-01 20:12:05.263 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:05 compute-0 nova_compute[189564]: 2025-12-01 20:12:05.263 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  1 20:12:05 compute-0 podman[260043]: 2025-12-01 20:12:05.304840936 +0000 UTC m=+0.073845574 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  1 20:12:07 compute-0 nova_compute[189564]: 2025-12-01 20:12:07.983 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:09 compute-0 nova_compute[189564]: 2025-12-01 20:12:09.200 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:09 compute-0 podman[260068]: 2025-12-01 20:12:09.341896535 +0000 UTC m=+0.097218464 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS)
Dec  1 20:12:09 compute-0 podman[260067]: 2025-12-01 20:12:09.356696116 +0000 UTC m=+0.124305607 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.29.0)
Dec  1 20:12:09 compute-0 podman[260075]: 2025-12-01 20:12:09.366864144 +0000 UTC m=+0.113873523 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  1 20:12:09 compute-0 podman[260073]: 2025-12-01 20:12:09.379426515 +0000 UTC m=+0.132656618 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  1 20:12:09 compute-0 podman[260081]: 2025-12-01 20:12:09.389005614 +0000 UTC m=+0.129551502 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  1 20:12:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:12:12.238 106833 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:12:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:12:12.239 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:12:12 compute-0 ovn_metadata_agent[106828]: 2025-12-01 20:12:12.240 106833 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:12:12 compute-0 nova_compute[189564]: 2025-12-01 20:12:12.989 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:14 compute-0 nova_compute[189564]: 2025-12-01 20:12:14.203 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:17 compute-0 systemd-logind[797]: New session 32 of user zuul.
Dec  1 20:12:17 compute-0 systemd[1]: Started Session 32 of User zuul.
Dec  1 20:12:17 compute-0 nova_compute[189564]: 2025-12-01 20:12:17.993 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:19 compute-0 nova_compute[189564]: 2025-12-01 20:12:19.205 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:20 compute-0 podman[260311]: 2025-12-01 20:12:20.706165986 +0000 UTC m=+0.107775352 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Dec  1 20:12:20 compute-0 podman[260310]: 2025-12-01 20:12:20.729270967 +0000 UTC m=+0.134366502 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  1 20:12:23 compute-0 nova_compute[189564]: 2025-12-01 20:12:22.999 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:23 compute-0 nova_compute[189564]: 2025-12-01 20:12:23.265 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:24 compute-0 nova_compute[189564]: 2025-12-01 20:12:24.208 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:26 compute-0 nova_compute[189564]: 2025-12-01 20:12:26.249 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:26 compute-0 nova_compute[189564]: 2025-12-01 20:12:26.250 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  1 20:12:28 compute-0 nova_compute[189564]: 2025-12-01 20:12:28.005 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:29 compute-0 nova_compute[189564]: 2025-12-01 20:12:29.212 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:29 compute-0 nova_compute[189564]: 2025-12-01 20:12:29.250 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:29 compute-0 podman[203750]: time="2025-12-01T20:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  1 20:12:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28288 "" "Go-http-client/1.1"
Dec  1 20:12:29 compute-0 podman[203750]: @ - - [01/Dec/2025:20:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Dec  1 20:12:30 compute-0 nova_compute[189564]: 2025-12-01 20:12:30.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:31 compute-0 nova_compute[189564]: 2025-12-01 20:12:31.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:31 compute-0 nova_compute[189564]: 2025-12-01 20:12:31.247 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  1 20:12:31 compute-0 nova_compute[189564]: 2025-12-01 20:12:31.247 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  1 20:12:31 compute-0 nova_compute[189564]: 2025-12-01 20:12:31.274 189568 DEBUG nova.compute.manager [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  1 20:12:31 compute-0 openstack_network_exporter[205914]: ERROR   20:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  1 20:12:31 compute-0 openstack_network_exporter[205914]: ERROR   20:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:12:31 compute-0 openstack_network_exporter[205914]: ERROR   20:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  1 20:12:31 compute-0 openstack_network_exporter[205914]: ERROR   20:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  1 20:12:31 compute-0 openstack_network_exporter[205914]: ERROR   20:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.248 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.341 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.341 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.342 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.342 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.812 189568 WARNING nova.virt.libvirt.driver [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.814 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5257MB free_disk=72.3036003112793GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.815 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.815 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.890 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.890 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.916 189568 DEBUG nova.compute.provider_tree [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed in ProviderTree for provider: 0211b5d4-bab8-409f-8f53-df766ffbcb27 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.934 189568 DEBUG nova.scheduler.client.report [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Inventory has not changed for provider 0211b5d4-bab8-409f-8f53-df766ffbcb27 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.937 189568 DEBUG nova.compute.resource_tracker [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  1 20:12:32 compute-0 nova_compute[189564]: 2025-12-01 20:12:32.938 189568 DEBUG oslo_concurrency.lockutils [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  1 20:12:33 compute-0 nova_compute[189564]: 2025-12-01 20:12:33.009 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:33 compute-0 podman[260448]: 2025-12-01 20:12:33.379465009 +0000 UTC m=+0.131418570 container health_status eee51cf6f5ac491b85fb09827fece37ea9afa564acb449d4ec0d0155a452f02b (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  1 20:12:34 compute-0 nova_compute[189564]: 2025-12-01 20:12:34.216 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:34 compute-0 nova_compute[189564]: 2025-12-01 20:12:34.938 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:36 compute-0 podman[260469]: 2025-12-01 20:12:36.329456034 +0000 UTC m=+0.096376127 container health_status 61ddba5fa28aaa4735d9b3aecc3d300f499f9ae2248b5f55cd6d6127fcce4236 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  1 20:12:38 compute-0 nova_compute[189564]: 2025-12-01 20:12:38.016 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:38 compute-0 nova_compute[189564]: 2025-12-01 20:12:38.243 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:38 compute-0 nova_compute[189564]: 2025-12-01 20:12:38.247 189568 DEBUG oslo_service.periodic_task [None req-7cec860b-c1c5-49d2-8a4f-732c76c1788d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  1 20:12:39 compute-0 nova_compute[189564]: 2025-12-01 20:12:39.221 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:40 compute-0 podman[260498]: 2025-12-01 20:12:40.325608327 +0000 UTC m=+0.082071340 container health_status 43b014a7c88484529ca37fbc1aa040d68d3c565a681d98a3ffe696ded1c66c8b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  1 20:12:40 compute-0 podman[260495]: 2025-12-01 20:12:40.333918086 +0000 UTC m=+0.102794357 container health_status 23921011954a99f31a49758e512d9e3575f6b2ebf536e7df85e3be11e7690b76 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  1 20:12:40 compute-0 podman[260496]: 2025-12-01 20:12:40.336827547 +0000 UTC m=+0.102542079 container health_status 34a1614f07848d6f362b3ed1fa2407dbcd0f2c7c831f6ef43ff8b2d278ce7c3d (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  1 20:12:40 compute-0 podman[260497]: 2025-12-01 20:12:40.344694463 +0000 UTC m=+0.106466682 container health_status 3a3d264f7eb8586ed3d44da8bad3c69e5911bcb2ca062b771386b6d47a5118de (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  1 20:12:40 compute-0 podman[260503]: 2025-12-01 20:12:40.406543001 +0000 UTC m=+0.159049101 container health_status ac5c9902abf0db9f43c889599b2bcc73d33eb8b65444ffdd9b56a5cc93dab792 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  1 20:12:43 compute-0 nova_compute[189564]: 2025-12-01 20:12:43.022 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:44 compute-0 nova_compute[189564]: 2025-12-01 20:12:44.223 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:48 compute-0 nova_compute[189564]: 2025-12-01 20:12:48.026 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:49 compute-0 nova_compute[189564]: 2025-12-01 20:12:49.225 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:50 compute-0 ovs-vsctl[260675]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  1 20:12:51 compute-0 podman[260703]: 2025-12-01 20:12:51.339556304 +0000 UTC m=+0.097641021 container health_status 9bc16c1e84935b321683dd2dfd3901959431e420d380b6b9982945dff3d516b2 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  1 20:12:51 compute-0 podman[260708]: 2025-12-01 20:12:51.369417744 +0000 UTC m=+0.129873494 container health_status b46bda7fc50db8041eef75400930fc7591d8331b3adc9964f77b2cc87c6b98e2 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  1 20:12:51 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 260200 (sos)
Dec  1 20:12:51 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  1 20:12:51 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  1 20:12:52 compute-0 virtqemud[189187]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  1 20:12:52 compute-0 virtqemud[189187]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  1 20:12:52 compute-0 virtqemud[189187]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
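[annotation] The three virtqemud messages above record attempts to reach the read-only sockets of the modular libvirt daemons (virtnetworkd, virtnwfilterd, virtstoraged); that they are missing suggests only virtqemud is deployed on this node. A quick way to confirm which per-driver sockets are actually present (socket paths copied from the log lines above; the script itself is illustrative):

    import os

    SOCKETS = [
        "/var/run/libvirt/virtnetworkd-sock-ro",
        "/var/run/libvirt/virtnwfilterd-sock-ro",
        "/var/run/libvirt/virtstoraged-sock-ro",
    ]
    for path in SOCKETS:
        # os.path.exists() is enough here: the daemons create these sockets
        # at startup, so a missing path means the daemon is not running.
        print(path, "present" if os.path.exists(path) else "missing")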
Dec  1 20:12:53 compute-0 nova_compute[189564]: 2025-12-01 20:12:53.030 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:54 compute-0 nova_compute[189564]: 2025-12-01 20:12:54.228 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  1 20:12:56 compute-0 systemd[1]: Starting Hostname Service...
Dec  1 20:12:56 compute-0 systemd[1]: Started Hostname Service.
Dec  1 20:12:58 compute-0 nova_compute[189564]: 2025-12-01 20:12:58.035 189568 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
